Consequences for the development of artificial social agents -- Selected Essay Writing Sample
2016-03-09 Source: 51due Tutor Group Category: Essay Sample
Agents that develop their heuristics over time from within the relevant society avoid having to infer them later. In practice, many models of socially interacting agents take such heuristics as the basis for their agents, even though the heuristics are merely a result of being such an agent in such an environment. The following essay sample discusses this in detail.
Abstract
If the above is the case and important aspects of our social intelligence need to be socially situated for their complete development, then this has consequences for programmers who are trying to construct or model such agents. Generally such a process of construction happens separately from the social situation that the agents are to inhabit – the programmer has a goal or specification in mind, tries to implement the agent to meet it, and later the agents are situated so as to interact with others. Whether this is possible depends on the extent to which the aspects of its intelligence are practically abstractable to a model which is analysable into two (essentially) unitary parts: the agent and its environment.
If this can be done then one can indeed design the agent with this environment in mind. In this case the social environment is effectively modellable from the agent's point of view. If this sort of process is impractical (e.g. all the interactions in the social environment actually matter to the agent), this corresponds to a situation in which the agent is socially embedded (see section 5.5). Here the agent cannot model its social environment as a whole and thus is forced to evolve heuristics based on the individuals it knows about in that environment. Some of these heuristics are listed below; a brief illustrative code sketch follows the list. There are a number of possible responses to inhabiting such a social environment, including:
• Developing ways of structuring relationships to make them more reliable/predictable, e.g. contracts and friendship networks;
• Developing constraints on normal social behaviour via social norms and enforceable laws [5];
• Developing institutions and groupings that act to ‘filter out’ the complexity of the exterior social environment [2];
• Trying to identify good sources of information and opinion and relying on these as a basis for decision making;
• Imitating those agents whom many others imitate;
• Frequently sampling the social environment via ‘gossip’;
• and, finally, developing one’s heuristics over time from within the relevant society and so avoiding having to infer them later.
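As a minimal, hypothetical sketch of how one such heuristic might be operationalised in an agent-based model (all names and structures below are our own illustrative assumptions, not taken from the original paper), the following Python snippet implements "imitate those agents whom many others imitate": each agent only looks at the individuals it knows about and copies the behaviour of whichever agent its acquaintances most often imitate.

```python
import random
from collections import Counter

class SocialAgent:
    """Illustrative socially embedded agent (hypothetical sketch)."""

    def __init__(self, name, behaviour):
        self.name = name
        self.behaviour = behaviour      # an observable action/strategy label
        self.known_agents = []          # only the individuals this agent knows about
        self.imitates = None            # whom this agent currently imitates

    def choose_model(self):
        """Heuristic: imitate the agent whom most of my acquaintances imitate."""
        votes = Counter(a.imitates for a in self.known_agents
                        if a.imitates is not None)
        if votes:
            self.imitates = votes.most_common(1)[0][0]
            self.behaviour = self.imitates.behaviour   # copy its behaviour

# tiny usage example: a handful of agents with only partial knowledge of each other
agents = [SocialAgent(f"a{i}", behaviour=random.choice(["share", "hoard"]))
          for i in range(10)]
for a in agents:
    a.known_agents = random.sample([x for x in agents if x is not a], 4)
    a.imitates = random.choice(a.known_agents)   # arbitrary initial choice

for _ in range(5):                 # a few rounds of purely local, heuristic updating
    for a in agents:
        a.choose_model()
print(Counter(a.behaviour for a in agents))
```

The point of the sketch is only that the agent never models the whole social environment; it relies on a local heuristic defined over the individuals it happens to know.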
In practice many models of socially interacting agents take one (or a limited selection) of these heuristics as the basis for their agents. This is fine as long as one does not then make the false step of defining a social agent on the basis of one such heuristic. It is likely that intelligent and creative social agents that co-evolve within a society of other such agents that are individually recognisable will develop many separate and different heuristics [18]. The heuristics are merely a result of being such an agent in such an environment. This leads us to believe that a bottom-up (or constructivist [17, 19, 22, 37, 40]) approach may be more profitable than a top-down, a priori approach.
Challenges in SSI research
This section outlines a few research topics which we consider important to SSI research and which have, in our view, not yet gained as much attention as they deserve in the current research landscape. The list is not meant to be complete.
Culturally Situated Agents
The intelligent agents community, which consists of people building software or hardware agents, or modelling societies of agents which show certain (social) intelligence, has so far not paid much attention to the issue that all technological products reflect the culture from which they originate. In the following we would like to consider autonomous agents, following the definition given by Franklin and Graesser [21]: “An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future”. Currently, a paradigm shift from algorithms to interaction is acknowledged; see [41], which argues that recent technology is more than a continued development of ever more powerful rule-based Turing machines based on the closed-system metaphor. Instead, interactive systems (interaction machines, which are inherently open systems) are supposed to be the computational paradigm of the future.
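To make the sense-act loop implied by this definition concrete, here is a minimal Python sketch. The interface and the toy thermostat example are our own illustrative assumptions; they are not part of Franklin and Graesser's definition.

```python
from abc import ABC, abstractmethod

class AutonomousAgent(ABC):
    """Sketch of an agent that senses its environment and acts on it, over time,
    in pursuit of its own agenda, so that its actions affect later percepts."""

    @abstractmethod
    def sense(self, environment):
        """Return a percept derived from the current environment."""

    @abstractmethod
    def act(self, percept):
        """Choose an action that serves the agent's own agenda."""

    def live(self, environment, steps):
        """The ongoing sense-act loop: actions feed back into future percepts."""
        for _ in range(steps):
            percept = self.sense(environment)
            action = self.act(percept)
            environment.apply(action)   # the environment is changed by the action

class Thermostat(AutonomousAgent):
    """Toy concrete agent whose 'agenda' is keeping a temperature near a target."""
    def __init__(self, target):
        self.target = target
    def sense(self, environment):
        return environment.temperature
    def act(self, temperature):
        return "heat" if temperature < self.target else "idle"

class Room:
    """Toy environment whose state is altered by the agent's actions."""
    def __init__(self, temperature):
        self.temperature = temperature
    def apply(self, action):
        self.temperature += 1.0 if action == "heat" else -0.5

room = Room(temperature=15.0)
Thermostat(target=20.0).live(room, steps=20)
print(round(room.temperature, 1))
```

Note that nothing in this loop is social or cultural by itself; the point of the section is that once such an agent interacts with humans, the same architecture has to be situated in a particular social and cultural context.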
The shift of attention from algorithms to interaction also indicates a shift in beliefs: from the belief that one can discover and implement a universally intelligent, general-purpose machine towards an interactive machine, i.e. an agent which is not intelligent by itself but only behaves intelligently during interaction with other agents of the same or a different kind, e.g. agents which interact with humans. Thus, the (social) context strongly matters, and in the case of interactions with humans such a socially situated agent which is used in different countries and communities also has to be a culturally situated agent (see [34] and a PRICAI'98 workshop which addresses this topic). We cannot expect that agents, both natural and artificial, behave identically in different social and cultural contexts. Thus, the design and evaluation of agents could benefit from considering these issues.
Imitation and the ‘like-me’ test
A workshop at the latest Autonomous Agents AA'98 conference characterized imitation as follows: “Imitation is supposed to be among the least common and most complex forms of animal learning”. It is found in highly social species which show, from a human observer's point of view, ‘intelligent’ behaviour and signs of the evolution of traditions and culture. There is strong evidence for imitation in certain primates (humans and chimpanzees), cetaceans (whales and dolphins) and specific birds like parrots.
Recently, imitation has begun to be studied in domains dealing with such non-natural agents as robots, as a tool for easing the programming of complex tasks or for endowing groups of robots with the ability to share skills without the intervention of a programmer. Imitation plays an important role in the more general context of interaction and collaboration between agents and humans, e.g. between software agents and human users. Intelligent software agents need to get to know their users in order to assist them and do work on their behalf. Imitation is therefore a means of establishing a ‘social relationship’ and learning about the actions of the user, in order to include them in the agent's own behavioural repertoire; a minimal sketch of this idea follows below.
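As a purely hypothetical illustration of that last point (none of the names, thresholds or data structures below come from the original text), a software agent might watch the actions a user performs in a given context and, once an action has been seen often enough, add it to its own repertoire so that it can offer to perform it itself:

```python
from collections import defaultdict

class ImitatingAssistant:
    """Hypothetical sketch: learn user actions by observation ('imitation')
    and add frequently observed ones to the agent's own repertoire."""

    def __init__(self, adoption_threshold=3):
        self.adoption_threshold = adoption_threshold
        self.observations = defaultdict(int)   # (context, action) -> count
        self.repertoire = set()                # actions the agent can now offer

    def observe(self, context, action):
        """Record one user action seen in a given context."""
        self.observations[(context, action)] += 1
        if self.observations[(context, action)] >= self.adoption_threshold:
            self.repertoire.add((context, action))

    def suggest(self, context):
        """Offer the imitated actions that apply in the current context."""
        return [action for (ctx, action) in self.repertoire if ctx == context]

# usage: after repeatedly seeing the user archive newsletters, the agent offers to do it
agent = ImitatingAssistant()
for _ in range(3):
    agent.observe(context="newsletter", action="archive")
print(agent.suggest("newsletter"))   # ['archive']
```

This captures only the 'technological' dimension discussed below; the social function of imitation, which the section goes on to emphasise, is not represented in such a frequency count.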
Imitation is, on the one hand, considered an efficient mechanism of social learning; on the other hand, experiments in developmental psychology suggest that infants use imitation to get to know persons, possibly applying a ‘like-me’ test (‘persons are objects which I can imitate and which imitate me’), see the discussion in [14]. Imitation (e.g. as social reinforcement techniques or programming-by-demonstration setups in robotics and machine learning) has been used primarily by focusing on the ‘technological’ dimension (e.g. imitation providing the context for learning sequences of actions), while disregarding the social function of imitation.
Additionally, the split between imitation research in the natural sciences and in the sciences of the artificial is difficult to bridge; we are far from a common research framework supporting an interdisciplinary approach toward simulation (cf. [33] for an attempt to provide a mathematical framework to facilitate the analysis and evaluation of imitation research). With an embodied system inhabiting a non-trivial environment, imitation addresses all major AI problems: perception-action coupling, body schemata, recognition and matching of movements, reactive and cognitive aspects of imitation, the development of sociality, and the notion of ‘self’, to mention just a few issues. Imitation involves at least two agents sharing a context, allowing one agent to learn from the other. The exchange of skills, knowledge, and experience between natural agents cannot be done by brain-to-brain communication in the way that computers can communicate via the internet; it is mediated via the body, the environment, and the verbal or non-verbal expression or body language of the ‘sender’, which in turn has to be interpreted and integrated into the ‘listener's’ own understanding and behavioural repertoire. And, as imitation games between babies and parents show, the metaphor of ‘sender’ and ‘receiver’ is deceptive, since the game emerges from the engagement of both agents in the interaction (see the notions of situated activity and interactive emergence in [24]).
Modelling Social Modelling
Biology rests on a very large body of observational fieldwork. This is available as a huge resource for the inspiration and verification of biological models and theories. However, the “flowering” of the field that we have witnessed in the last half of this century only occurred once some of the basic chemistry underlying biological processes had been sorted out. This body of biochemistry constrains and validates biological models. If we are to succeed in making sense of societies of agents and their processes, we may have to undertake a similar project to provide the underpinning of our models. One of the requisite foundational projects that will need to be done is the development of a set of models of how individuals actually model others and themselves. This is also a complex area, but a start has been made in the area of imitation (see section 5.2 and papers [4, 15]), and evidence of what can occur in the absence of social modelling ability may be gleaned from cases of autism (section 3.5).
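As a purely illustrative sketch of what "modelling how individuals model others and themselves" could look like in code (the data structures below are our own assumptions, not taken from the essay), each agent here keeps a simple predictive model of every acquaintance, and of itself, updated from observed behaviour:

```python
from collections import Counter, defaultdict

class AgentModel:
    """A very simple model one agent keeps of another: a frequency table of
    the actions it has observed that agent take in each context."""
    def __init__(self):
        self.counts = defaultdict(Counter)   # context -> Counter of actions

    def observe(self, context, action):
        self.counts[context][action] += 1

    def predict(self, context):
        """Most frequently observed action in this context, if any."""
        seen = self.counts.get(context)
        return seen.most_common(1)[0][0] if seen else None

class ModellingAgent:
    """An agent that maintains models of others and of itself (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.models = defaultdict(AgentModel)   # other agent's name -> model
        self.self_model = AgentModel()          # a model of its own behaviour

    def watch(self, other_name, context, action):
        self.models[other_name].observe(context, action)

    def expect(self, other_name, context):
        return self.models[other_name].predict(context)

# usage: after observing 'bob' greet strangers twice, 'alice' predicts he will again
alice = ModellingAgent("alice")
alice.watch("bob", context="meets_stranger", action="greet")
alice.watch("bob", context="meets_stranger", action="greet")
print(alice.expect("bob", "meets_stranger"))   # 'greet'
```

A frequency table is, of course, far short of the kind of social modelling the essay calls for; it only indicates the sort of foundational, explicit machinery such a project would have to start from.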
