Monday, October 3, 2011

BRIDGING SOCIALLY-ENHANCED VIRTUAL COMMUNITIES


ABSTRACT:

Interactions spanning multiple organizations have become an important aspect of today’s collaboration landscape. Organizations create alliances to fulfill strategic objectives. The dynamic nature of collaborations increasingly demands automated techniques and algorithms to support the creation of such alliances. Our approach is based on recommending potential alliances by discovering currently relevant competence sources and supporting their semi-automatic formation. The environment is service-oriented, comprising humans and software services with distinct capabilities. To mediate between previously separated groups and organizations, we introduce the broker concept for bridging disconnected networks. We present a dynamic broker-discovery approach based on interaction mining techniques and trust metrics.


EXISTING SYSTEM:

While existing platforms support only simple interaction models (tasks are assigned to individuals), social network principles enable more advanced techniques such as group formation and adaptive coordination.


PROPOSED SYSTEM:

Our approach is based on interaction mining and metrics to discover brokers suitable for connecting communities in service-oriented collaborations. The availability of rich and plentiful data on human interactions in social networks has closed an important loop, allowing one to model social phenomena and to use these models in the design of new computing applications such as crowdsourcing techniques. A wide range of computational trust models have been proposed; we focus on social trust, which relies on user interests and collaboration behavior. Technically, the focus of BQDL is to provide an intuitive mechanism for querying data from social networks. These networks are established through mining and metrics.


MODULES:

Supporting the Formation of Expert Groups:

Successfully performed compositions of actors should not be dissolved but actively facilitated for future collaborations. Thus, tight trust relations can be dynamically converted to FOAF relations (i.e., the discovery of relevant social networks).

Controlling Interactions and Delegations:

Discovery and interactions between members can be based on FOAF relations. People tend to favor requests from well-known members over those from unknown parties.

Establishment of new Social Relations:

The emergence of new personal relations is actively facilitated through brokers. The introduction of new partners through brokers (e.g., b introduces u and j to each other) leads to future trustworthy compositions.


ALGORITHM:

PAGERANK ALGORITHM:

This can be accomplished by using eigenvector methods in social networks, such as the PageRank algorithm, to establish authority scores (the importance or social standing of a node in the network), or advanced game-theoretic techniques based on the concept of structural holes.
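As an illustration, the following C# sketch shows the basic power-iteration form of PageRank over an adjacency list. The graph representation, damping factor, and iteration count are assumptions for illustration, not details taken from this project.

```csharp
// Minimal PageRank sketch (illustrative assumption, not the project's code).
// Nodes are indexed 0..n-1; links[i] lists the nodes that node i points to.
using System;
using System.Collections.Generic;
using System.Linq;

static class PageRank
{
    public static double[] Compute(List<int>[] links, double damping = 0.85, int iterations = 50)
    {
        int n = links.Length;
        double[] rank = Enumerable.Repeat(1.0 / n, n).ToArray();

        for (int it = 0; it < iterations; it++)
        {
            double[] next = Enumerable.Repeat((1.0 - damping) / n, n).ToArray();
            for (int i = 0; i < n; i++)
            {
                if (links[i].Count == 0) continue; // dangling node: rank leaks (simplification)
                double share = damping * rank[i] / links[i].Count;
                foreach (int j in links[i])
                    next[j] += share;
            }
            rank = next;
        }
        return rank; // higher rank = higher authority / social standing of the node
    }
}
```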

Consider two initially disconnected communities (sets of nodes), declared as the variables source = {n1, n2, ..., ni} and target = {nj, nj+1, ..., nj+m}, residing in the graph G. The requirement (R1) is to find a broker connecting these disjoint sets of nodes (i.e., sets without any direct links between each other). The approach (A1) is as follows:

Two subgraphs G1 and G2 are created to determine brokers that connect the source community {u, v, w} with the target community {g, h, i}. The output (O1) of the query is a list of brokers connecting {u, v, w} and {g, h, i}; the query definition (D1) specifies its input/output parameters. As a first step, a (sub)select is performed using the statement shown in lines 6-11 of the query listing. The statement distinct(node) means that a set of unique brokers is selected based on the condition denoted in the Where clause, whose filter term ‘[1...*] n in source’ matches one or more nodes n in the source community.
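The effect of that query can be pictured with a small sketch: given the neighbor sets of a graph, a broker is any node outside both communities that has at least one link into each of them. The type and method names below are hypothetical, chosen only for illustration.

```csharp
// Hypothetical sketch of the broker query: find nodes linked into both the
// source and the target community, which are themselves disjoint.
using System.Collections.Generic;
using System.Linq;

static class BrokerDiscovery
{
    // neighbors[n] = set of nodes directly linked to n
    public static IEnumerable<string> FindBrokers(
        Dictionary<string, HashSet<string>> neighbors,
        HashSet<string> source, HashSet<string> target)
    {
        return neighbors.Keys
            .Where(b => !source.Contains(b) && !target.Contains(b)) // broker lies outside both
            .Where(b => neighbors[b].Overlaps(source)               // '[1...*] n in source'
                     && neighbors[b].Overlaps(target))              // and likewise for target
            .Distinct();                                            // distinct(node)
    }
}

// Example: FindBrokers(g, new HashSet<string> { "u", "v", "w" },
//                         new HashSet<string> { "g", "h", "i" })
// returns candidate brokers (such as a node b) connecting both communities.
```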


HARDWARE REQUIRED:

System : Pentium IV 2.4 GHz

Hard Disk : 40 GB

Floppy Drive : 1.44 MB

Monitor : 15" VGA Color

Mouse : Logitech

Keyboard : 110-key enhanced

RAM : 256 MB

SOFTWARE REQUIRED:

O/S : Windows XP

Language : ASP.NET, C#

Database : SQL Server 2005

ADAPTIVE PROVISIONING OF HUMAN EXPERTISE IN SERVICE-ORIENTED SYSTEMS


ABSTRACT:

Web-based collaborations have become essential in today’s business environments. Due to the availability of various SOA frameworks, Web services have emerged as the de facto technology for realizing flexible compositions of services. While most existing work focuses on the discovery and composition of software-based services, we highlight concepts for a people-centric Web. Knowledge-intensive environments clearly demand the provisioning of human expertise along with the sharing of computing resources or business data through software-based services. To address these challenges, we introduce an adaptive approach allowing humans to provide their expertise through services using SOA standards such as WSDL and SOAP. The seamless integration of humans in the SOA loop triggers numerous social implications, such as the evolving expertise and drifting interests of human service providers. Here we propose a framework based on interaction monitoring techniques that enables adaptations in SOA-based socio-technical systems.



EXISTING SYSTEM:

While most existing work focuses on the discovery and composition of software-based services, we highlight concepts for a people-centric Web. Knowledge-intensive environments clearly demand the provisioning of human expertise along with the sharing of computing resources or business data through software-based services.

Disadvantages:

Existing systems focus on software-based services only; they do not provision human expertise through standard SOA interfaces such as WSDL and SOAP.


PROPOSED SYSTEM:

The seamless integration of humans in the SOA loop triggers numerous social implications, such as the evolving expertise and drifting interests of human service providers. We propose a framework based on interaction monitoring techniques that enables adaptations in SOA-based socio-technical systems.

Advantages:

  • These systems are characterized by both technical and human/social aspects that are tightly bound and interconnected.
  • The technical aspects are very similar to traditional SOAs, including facilities to deploy, register and discover services, as well as to support flexible interactions.


HARDWARE & SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENTS:

System : Pentium IV 2.4 GHz

Hard Disk : 40 GB

Floppy Drive : 1.44 MB

Monitor : 15" VGA Color

Mouse : Logitech

RAM : 512 MB

SOFTWARE REQUIREMENTS:

Operating system : Windows XP

Coding Language : ASP.NET with C#

Database : SQL Server 2005


MODULE DESCRIPTION:

COLLABORATION PARTNERS:

The demand for models to support large-scale flexible collaborations has led to increasing research interest in adaptation techniques that enable and optimize interactions between collaboration partners. Examples include the changing interests and expertise of people, evolving interaction patterns due to dynamically changing roles of collaboration partners, and evolving community structures. Services provide the means to specify well-defined interfaces and let customers and collaboration partners use an organization’s resources through dedicated operations.

SERVICE INSTANCES:

The concept of personalized provisioning is enabled by creating dedicated service instances for each customer of a service provider. A standard service is instantiated and gradually customized according to a client’s requirements and a provider’s behavior. Web services help solve the interoperability problem by giving different applications a way to link their data; with Web services, data can be exchanged between different applications and different platforms.
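A minimal sketch of this per-customer instantiation pattern might look as follows; all type and member names are illustrative assumptions, not APIs from the project.

```csharp
// Sketch of personalized provisioning: one dedicated, gradually customizable
// service instance per customer (names are hypothetical).
using System.Collections.Generic;

class ServiceInstance
{
    public string CustomerId { get; private set; }
    public Dictionary<string, string> Settings { get; private set; }

    public ServiceInstance(string customerId)
    {
        CustomerId = customerId;
        Settings = new Dictionary<string, string>();
    }

    // Gradual customization: settings accumulate as the client's needs evolve.
    public void Customize(string key, string value) { Settings[key] = value; }
}

class ServiceProvider
{
    private readonly Dictionary<string, ServiceInstance> instances =
        new Dictionary<string, ServiceInstance>();

    // Instantiate the standard service once per customer, then reuse it.
    public ServiceInstance GetInstance(string customerId)
    {
        ServiceInstance inst;
        if (!instances.TryGetValue(customerId, out inst))
        {
            inst = new ServiceInstance(customerId);
            instances[customerId] = inst;
        }
        return inst;
    }
}
```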

INTERACTION MODEL:

Users are not statically bound to clients but are discovered at run time. Thus, interactions are ad hoc and dynamically performed with partners that are often not previously known. In SOA, interactions are typically modeled as SOAP messages. For example, a document translation service might be used successfully for research papers in computer science while rarely being used to translate business documents.
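For illustration, a SOAP request can be posted from C# with nothing more than HttpWebRequest. The endpoint URL and SOAP action below are placeholders, not services from this project.

```csharp
// Minimal sketch of an ad-hoc SOAP interaction: build an envelope and post it
// to a dynamically discovered endpoint (URL and action are placeholders).
using System.IO;
using System.Net;
using System.Text;

static class SoapClient
{
    public static string Call(string endpointUrl, string soapAction, string bodyXml)
    {
        string envelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<soap:Body>" + bodyXml + "</soap:Body></soap:Envelope>";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(endpointUrl);
        request.Method = "POST";
        request.ContentType = "text/xml; charset=utf-8";
        request.Headers.Add("SOAPAction", soapAction);

        byte[] payload = Encoding.UTF8.GetBytes(envelope);
        using (Stream s = request.GetRequestStream())
            s.Write(payload, 0, payload.Length);

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd(); // raw SOAP response XML
    }
}
```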

ADAPTATION STRATEGIES:

Client-driven interventions are the means to protect customers from unreliable services. For example, services that miss deadlines or do not respond for a long time are replaced by other, more reliable services in future discovery operations.

Provider-driven interventions are desired and initiated by service owners to shield themselves from malicious clients. For instance, requests from clients performing a denial-of-service attack by sending multiple requests in relatively short intervals are blocked (instead of processed) by the service.
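As a sketch of such a provider-driven intervention, a simple fixed-window rate limiter can block clients that send too many requests within a short interval. The thresholds and class names are assumptions for illustration.

```csharp
// Fixed-window rate limiter: block a client once it exceeds maxRequests
// within the current window (a minimal sketch, not the project's mechanism).
using System;
using System.Collections.Generic;

class RequestGuard
{
    private readonly int maxRequests;
    private readonly TimeSpan window;
    private readonly Dictionary<string, DateTime> windowStart = new Dictionary<string, DateTime>();
    private readonly Dictionary<string, int> requestCount = new Dictionary<string, int>();

    public RequestGuard(int maxRequests, TimeSpan window)
    {
        this.maxRequests = maxRequests;
        this.window = window;
    }

    // Returns false (block the request) when a client exceeds the allowed rate.
    public bool Allow(string clientId)
    {
        DateTime now = DateTime.UtcNow;
        DateTime start;
        if (!windowStart.TryGetValue(clientId, out start) || now - start > window)
        {
            windowStart[clientId] = now;   // start a fresh window
            requestCount[clientId] = 0;
        }
        requestCount[clientId]++;
        return requestCount[clientId] <= maxRequests;
    }
}

// Usage: var guard = new RequestGuard(100, TimeSpan.FromMinutes(1));
// if (!guard.Allow(clientId)) { /* reject instead of processing */ }
```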

Adaptive Fault Tolerant QoS Control Algorithms for Maximizing System Lifetime of Query-Based Wireless Sensor Networks


Abstract

Data sensing and retrieval in wireless sensor systems have a widespread application in areas such as security and surveillance monitoring, and command and control in battlefields. In query-based wireless sensor systems, a user would issue a query and expect a response to be returned within the deadline. While the use of fault tolerance mechanisms through redundancy improves query reliability in the presence of unreliable wireless communication and sensor faults, it could cause the energy of the system to be quickly depleted. Therefore, there is an inherent tradeoff between query reliability vs. energy consumption in query-based wireless sensor systems. In this paper, we develop adaptive fault tolerant quality of service (QoS) control algorithms based on hop-by-hop data delivery utilizing “source” and “path” redundancy, with the goal to satisfy application QoS requirements while prolonging the lifetime of the sensor system. We develop a mathematical model for the lifetime of the sensor system as a function of system parameters including the “source” and “path” redundancy levels utilized. We discover that there exists optimal “source” and “path” redundancy under which the lifetime of the system is maximized while satisfying application QoS requirements. Numerical data are presented and validated through extensive simulation, with physical interpretations given, to demonstrate the feasibility of our algorithm design.

Architecture

[Figure: architecture of the WSN]

Algorithm

1. Adaptive fault tolerant QoS control (AFTQC) algorithm:

The algorithm developed in this paper uses two forms of redundancy. The first is path redundancy: instead of using a single path to connect a source cluster to the processing center, mp disjoint paths may be used. The second is source redundancy: instead of having one sensor node in a source cluster return the requested sensor data, ms sensor nodes may be used to return readings, to cope with data transmission and/or sensor faults. The architecture above illustrates a scenario in which mp = 2 (two paths going from the CH to the processing center) and ms = 5 (five SNs returning sensor readings to the CH).
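The mp/ms parameters can be pictured as a small configuration object. The "at least one path succeeds" expression below is the standard multipath reliability formula, shown as an illustrative assumption rather than the paper's full analytical model.

```csharp
// Illustrative sketch of the AFTQC redundancy parameters (names assumed):
// mp disjoint CH-to-PC paths and ms SNs returning readings.
class RedundancyConfig
{
    public int PathRedundancy;   // mp: number of disjoint CH-to-PC paths
    public int SourceRedundancy; // ms: number of SNs returning readings

    // Response delivery succeeds if at least one of the mp paths delivers
    // (standard multipath formula; a simplification of the paper's model).
    public double PathSuccessProbability(double singlePathReliability)
    {
        double allFail = System.Math.Pow(1.0 - singlePathReliability, PathRedundancy);
        return 1.0 - allFail;
    }
}
```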

2. Clustering Algorithm:

A clustering algorithm that aims to fairly rotate SNs into the role of CH is used to organize sensors into clusters for energy conservation purposes. The function of a CH is to manage the network within the cluster, gather sensor readings from the SNs within the cluster, and relay data in response to a query. The clustering algorithm is executed throughout the system lifetime and works as follows (a sketch of the CH selection step appears after the list):

• Readings are aggregated within each cluster.

• Each cluster has a CH.

• Users issue queries through any CH; the CH that receives the query is called the Processing Center (PC).

• Each non-CH node selects the CH candidate with the highest residual energy and sends it a cluster-join message (which includes the non-CH node’s location); the CH acknowledges this message.

• The role of CH is randomly rotated among nodes, so nodes consume their energy evenly.
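The CH selection step from the list above might be sketched as follows; the types and fields are illustrative assumptions.

```csharp
// Sketch of the cluster-join step: each non-CH node picks the CH candidate
// with the highest residual energy (candidates assumed non-empty).
using System.Collections.Generic;
using System.Linq;

class ClusterHeadCandidate
{
    public string Id;
    public double ResidualEnergy; // remaining battery energy of the candidate
}

static class Clustering
{
    public static ClusterHeadCandidate SelectClusterHead(
        IEnumerable<ClusterHeadCandidate> candidates)
    {
        // Highest residual energy wins; ties are broken arbitrarily.
        return candidates.OrderByDescending(ch => ch.ResidualEnergy).First();
    }
}
```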

Existing System:

Existing research efforts on applying redundancy to satisfy QoS requirements in query-based WSNs fall into three categories: traditional end-to-end QoS, reliability assurance, and application-specific QoS. Traditional end-to-end QoS solutions are based on the concept of end-to-end QoS requirements. The problem is that it may not be feasible to implement end-to-end QoS in WSNs due to the complexity and high cost of the protocols for resource-constrained sensors.

This approach also does not consider the reliability issue.

Disadvantages:

1. Complexity and high cost of the protocols for resource constrained sensors

2. Does not consider the reliability issue.

3. Does not consider energy issues.

4. Data delivery properties such as reliability and timeliness are not considered.

Proposed System:

In this paper, we develop adaptive fault tolerant quality of service (QoS) control algorithms based on hop-by-hop data delivery utilizing “source” and “path” redundancy, with the goal to satisfy application QoS requirements while prolonging the lifetime of the sensor system. We develop a mathematical model for the lifetime of the sensor system as a function of system parameters including the “source” and “path” redundancy levels utilized. We discover that there exists optimal “source” and “path” redundancy under which the lifetime of the system is maximized while satisfying application QoS requirements.

Advantages:

1. Applies redundancy to satisfy application-specified reliability and timeliness requirements for query-based WSNs.

2. Develops the notion of “path” and “source” level redundancy.

3. The lifetime of the system is maximized.

4. Timeliness: multiple data delivery speed options.

5. Reliability: multipath forwarding.

Modules:

1. General Approach

In this paper we are also interested in applying redundancy to satisfy application specified reliability and timeliness requirements for query-based WSNs. Moreover, we aim to determine the optimal redundancy level that could satisfy QoS requirements while prolonging the lifetime of the WSN. Specifically, we develop the notion of “path” and “source” level redundancy. When given QoS requirements of a query, we identify optimal path and source redundancy such that not only QoS requirements are satisfied, but also the lifetime of the system is maximized. We develop adaptive fault tolerant QoS control (AFTQC) algorithms based on hop-by-hop data delivery to achieve the desired level of redundancy and to eliminate energy expended for maintaining routing paths in the WSN.
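A hedged sketch of that optimization step: enumerate candidate (mp, ms) pairs and keep the pair that maximizes estimated lifetime while meeting the reliability requirement. The lifetime and reliability functions below stand in for the paper's analytical model and are assumed to be supplied by the caller.

```csharp
// Sketch: search over (mp, ms) for the redundancy levels that maximize
// lifetime subject to a QoS (reliability) constraint. The model functions
// are placeholders for the paper's analytical expressions.
using System;

static class Aftqc
{
    public static void FindOptimalRedundancy(
        Func<int, int, double> lifetime,      // estimated lifetime for (mp, ms)
        Func<int, int, double> reliability,   // query success probability for (mp, ms)
        double requiredReliability,
        int maxMp, int maxMs,
        out int bestMp, out int bestMs)
    {
        double best = double.MinValue;
        bestMp = 1; bestMs = 1;
        for (int mp = 1; mp <= maxMp; mp++)
            for (int ms = 1; ms <= maxMs; ms++)
                if (reliability(mp, ms) >= requiredReliability && lifetime(mp, ms) > best)
                {
                    best = lifetime(mp, ms);
                    bestMp = mp; bestMs = ms;
                }
    }
}
```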

2. Software Fault

For source redundancy, ms SNs are used to return sensor readings. If we consider both hardware and software failures of SNs, the system fails if a majority of the SNs do not return sensor readings (due to hardware failure), or if a majority of the SNs return sensor readings but no majority agrees on the same reading (due to software failure). Assume that all SNs have the same software failure probability, denoted by qs, and that all sensors sensing a given event make the same measurements. The failure probability is then the sum of two terms: the first is the probability that a majority of the ms SNs fail to return sensor readings due to hardware failure, and the second is the probability that a majority of the ms SNs return sensor readings but no majority of them agrees on the same sensor reading as the output because of software failure.
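The two failure terms can be written down with binomial probabilities. In the sketch below, qh and qs denote the per-SN hardware and software failure probabilities; the second term uses the common approximation that no majority agrees when a majority of SNs fail in software (faulty readings are assumed not to coincide). This is an illustrative simplification, not the paper's exact derivation.

```csharp
// Sketch of the two-term failure probability for source redundancy.
using System;

static class FailureModel
{
    static double Binomial(int n, int k) // n choose k
    {
        double r = 1.0;
        for (int i = 1; i <= k; i++) r = r * (n - k + i) / i;
        return r;
    }

    // Probability that at least `threshold` of n independent events occur.
    static double AtLeast(int n, int threshold, double p)
    {
        double sum = 0.0;
        for (int k = threshold; k <= n; k++)
            sum += Binomial(n, k) * Math.Pow(p, k) * Math.Pow(1 - p, n - k);
        return sum;
    }

    // First term: a majority of the ms SNs fail in hardware (no reading returned).
    // Second term: readings are returned, but a majority fail in software, so no
    // majority agrees on the same reading (approximation noted above).
    public static double QueryFailure(int ms, double qh, double qs)
    {
        int majority = ms / 2 + 1;
        double hardwareTerm = AtLeast(ms, majority, qh);
        double softwareTerm = (1 - hardwareTerm) * AtLeast(ms, majority, qs);
        return hardwareTerm + softwareTerm;
    }
}
```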

3. Data Aggregation

The analysis performed thus far assumes that a source CH does not aggregate data. The CH may receive up to ms redundant sensor readings due to source redundancy but will just forward the first one received to the PC. Thus, the data packet size is the same. For more sophisticated scenarios, conceivably the CH could also aggregate data for query processing and the size of the aggregate packet may be larger than the average data packet size. We extend the analysis to deal with data aggregation in two ways. The first is to set a larger size for the aggregated packet that would be transmitted from a source CH to the PC. This will have the effect of favoring the use of a smaller number of redundant paths (i.e., mp) because more energy would be expended to transmit aggregate packets from the source CH to the PC. The second is for the CH to collect a majority of sensor readings from its sensors before data are aggregated and transmitted to the PC.
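The second option, collecting a majority of readings before aggregation, can be sketched as an exact-match majority vote at the CH; this is an illustrative simplification, not the paper's exact procedure.

```csharp
// Sketch: the CH collects readings until a majority of the ms SNs agree on a
// value, then forwards one aggregate packet to the PC.
using System.Collections.Generic;
using System.Linq;

static class Aggregation
{
    // Returns the agreed reading, or null if no majority has formed yet.
    public static double? MajorityReading(IList<double> readings, int ms)
    {
        if (readings.Count == 0) return null;
        int needed = ms / 2 + 1;
        var top = readings.GroupBy(r => r)                 // exact-match vote
                          .OrderByDescending(g => g.Count())
                          .First();
        return top.Count() >= needed ? top.Key : (double?)null;
    }
}
```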

4. Forward Traffic

The analysis performed in the paper considers only the reverse traffic for response propagation from the SNs to the PC and neglects the forward traffic for query dissemination from the sink to the CH and SNs. The reliability and energy consumption of the forward traffic due to hop-by-hop query delivery can be calculated with an analysis similar to that for the reverse traffic. The success probability (Rq) would be adjusted by treating the forward and reverse traffic together as a series system. The energy consumption of a query (Eq) would be used to calculate the maximum number of queries the system can possibly process.
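In code, the series-system adjustment amounts to multiplying the forward and reverse success probabilities, and dividing the total energy budget by the per-query energy bounds the number of queries. The variable names mirror the text (Rq, Eq); the sketch itself is an assumed illustration.

```csharp
// Series-system adjustment of Rq and the query-count bound derived from Eq.
static class SeriesAdjustment
{
    // Both directions must succeed for the query to succeed (series system).
    public static double AdjustedSuccess(double forwardReliability, double reverseReliability)
    {
        return forwardReliability * reverseReliability; // adjusted Rq
    }

    // Approximate system lifetime measured in queries processed.
    public static int MaxQueries(double totalEnergy, double energyPerQuery)
    {
        return (int)(totalEnergy / energyPerQuery); // totalEnergy / Eq
    }
}
```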

HARDWARE & SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENTS:

· System : Pentium IV 2.4 GHz

· Hard Disk : 40 GB

· Floppy Drive : 1.44 MB

· Monitor : 15" VGA Color

· Mouse : Logitech

· RAM : 512 MB

SOFTWARE REQUIREMENTS:

· Operating system : Windows XP Professional

· Coding Language : C#.NET