Objectification as an emergent feature of conceptual metaphorization
2016-02-25 Source: 51due tutor group Category: Essay sample
In particular, its compatibility with connectionist cognitive models is assessed. To situate the theory within cognitive semantics, two contrasting implementations are considered. On connectionist accounts of cognition, the idea that neural networks can further the understanding of the mind traces back to computational modelling approaches to cognition. The essay sample below elaborates.
Introduction
The purpose of the first two theoretical chapters of this thesis was to demonstrate that existing accounts of mental representation have left many unanswered questions regarding abstract concept understanding. I have shown that the distinction between abstract and concrete concepts, although operationalised in a variety of experiments, is not based on a set of objective criteria. We have considered potential grounds for this distinction. Following Szwedek's Objectification Theory (Szwedek 2000a, 2002, 2008, 2011), tangibility was identified as a potentially valid abstract/concrete distinction criterion. We have also considered the plausibility of OT as a conceptualisation model and research framework.
Objectification Theory, as an improvement over Conceptual Metaphor Theory, appears to be consistent with both theoretical accounts of mental representation (Ritchie 2003; Martin 2007) and experimental research (Della Rosa et al. 2010; Casasanto et al. 2001). Furthermore, OT has been shown to increase the predictive and explanatory power of CMT as an account of conceptualisation. The present chapter focuses on assessing the plausibility of Objectification Theory in the context of research on abstract concept creation, in particular its compatibility with connectionist cognitive models. In order to place it in the more general framework of cognitive semantics, we consider two contrasting implementations of the theory: objectification as an emergent feature and as a process.
Connectionist models in cognitive theorizing

The idea that neural networks can be used to further the understanding of the mind dates back to cognitive connectionism, a computational modelling approach to cognition. Network models of cognitive functions reoriented the study of natural semantics and conceptualisation, becoming a major step in rethinking the nature of concepts. The notion of concept learning in humans has been revolutionized by neural networks, which showed that learning is possible in the absence of negative examples (Regier 1996), that complex rules can be learned on the basis of simple premises (Elman et al. eds. 1998), and that simple networks (perceptrons) learning to categorise patterns arrange them into “concepts” with a prototypical structure resembling that hypothesised by Rosch (1999).
By demonstrating that abstract symbols and explicit rules are not necessary for higher-level cognitive processing, connectionist models have been instrumental in undermining the classical theory of mental representation (Markman and Dietrich 2000a). Although neural network models do not claim to reflect actual brain architecture, they try to emulate its computational properties and structural constraints, often serving as adequate analogies of the cognitive processes they perform (Westermann et al. 2006). Taking into account the relationship between neural architecture and brain function, connectionism attempts to shed light on the mind. For instance, Regier's model of spatial language learning (1996), based on the principles of cognitive semantics (Brugman 1990; Lakoff 1987; Talmy 1983), learned spatial terms from a variety of natural languages through a set of videos that showed objects in different spatial relations and displayed the names of those arrangements. For example, one object hovering over another would be accompanied by the word “above”.
The network learned those spatial relations and their descriptors, and demonstrated its knowledge by naming relations shown in an unfamiliar set of videos. The model is a structured connectionist network based partly on cognitive semantic research on concepts, and partly on the mechanisms of human visual perception. Regier's study is of tremendous importance for cognitive science because it demonstrated that even complex conceptual operations can be learned on a purely neural and cognitive basis without the necessity for explicitly stated rules or abstract symbols.
Conceptual Metaphor Theory revealed that spatial relations are not merely used to reason about space, but constitute a vital part of abstract reasoning through metaphoric mappings (Talmy 1983). Regier's model shows that spatial relations can be learned without recourse to rules and symbols. CMT suggests that those representations are employed for abstract reasoning, effectively dismantling Markman and Dietrich's (2000b) argument that amodal concepts are prerequisites of abstract thinking. Clearly, there are circumstances that make CMT and connectionist modelling great allies in the quest for understanding abstract conceptualisation. Although constrained neural networks are usually motivated by neurological and psychological data regarding brain behaviour and structure, they are not meant as simplified replicas of the brain. Even such relatively well researched brain mechanisms as visual perception are far too complex to be replicated in this manner (Tadeusiewicz 1974).
The main aim of neural networks is explanatory. Connectionist models are constructed to shed light on a given cognitive process, and should be considered analogies or approximations (Duch 2009) of brain states rather than attempts to replicate brain structure. In computational cognitive modelling, insight is gathered from instances where the model performs successfully and, more importantly, where it makes errors. A successful model in this sense is not one that outperforms its human equivalent, but rather one that performs at a similar level of accuracy and makes similar types of errors. For instance, Elman (1990) designed a network whose task was to predict the next phoneme in a string of sounds constituting a grammatical sentence. The network was fed a set of sentences in order to determine the statistical likelihood of a phoneme appearing in a given context.
The learning algorithm then used the difference between the predicted phoneme and the actual sound to improve the accuracy of further predictions. In the course of the experiment the network learned to accurately predict sounds. In addition, it began to identify word boundaries. Perhaps the most interesting “side effect” of the experiment was that in identifying boundaries between words the model made erroneous guesses remarkably similar to those made by young children learning to speak. The model separated sequences of sounds into non-words and articles, making mistakes commonly seen in children's language, for instance “a nelephant” or “a dult”. Such experiments further the understanding of human conceptual processes in a way that is not reductionist.
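The error-driven learning loop described above can be sketched in miniature. The following is a simplified illustration, not Elman's actual simple recurrent network: a linear predictor trained with a delta rule absorbs the statistics of a toy symbol stream, and its prediction error shrinks over training. The symbol inventory, sequence, and learning rate are invented for the example.

```python
import numpy as np

# Simplified sketch of error-driven next-item prediction: from the
# current symbol alone, a linear map learns which symbol comes next.
rng = np.random.default_rng(0)

symbols = ["ba", "di", "gu"]              # toy "phoneme" inventory
seq = [0, 1, 2] * 200                     # deterministic toy sound stream
onehot = np.eye(len(symbols))

W = rng.normal(scale=0.1, size=(3, 3))    # connection weights
lr = 0.1                                  # learning rate
errors = []
for t in range(len(seq) - 1):
    x, target = onehot[seq[t]], onehot[seq[t + 1]]
    pred = W @ x                          # predicted next symbol
    err = target - pred                   # difference drives learning
    W += lr * np.outer(err, x)            # delta-rule weight update
    errors.append(float(np.sum(err ** 2)))

print(f"first error: {errors[0]:.3f}, last error: {errors[-1]:.6f}")
```

As in the experiment described above, the interesting quantity is the prediction error: it starts large and approaches zero as the network internalises the sequential statistics.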
Word boundary identification in the Elman (1990) experiment highlights another important aspect of connectionist models: feature emergence. Finding boundaries between words was not a task pre-programmed into the network, nor was it intended by its creators. Splitting sentences into words was a consequence of the learning and adjustment processes in the network. The observation that some complex systems manifest higher-level properties that are not attributable to their components is called emergence (Sawyer 2002).
Emergence of meaning

As a cognitive approach, connectionism claims to be based on the architecture of the human brain. Its main assumption is that cognitive functions can be modelled with the help of network structures (Thagard 2005). Cognitive processes are represented as activation spreading through the units of a network, the organisation of which may be constrained to provide a better analogy to brain function and/or structure. In principle, neural networks are composed only of units and weighted connections between them, so simplicity is an important advantage of this approach. All connectionist models can be deconstructed into four elements: units, connections, activations, and connection weights (Mareschal et al. 2007). The units of a connectionist model are basic information processing structures similar to neurons in biological networks.
The units of a connectionist network can represent the function of one neuron or a group of neurons (Thagard 2005: 116). As an analogy to biological networks, connectionist models are typically composed of many units arranged into layers. In most models the units are organized in three layers: the input units, the hidden units, and the output units. The input units supply the information, the hidden layer computes on it, and the output units supply the solution. Because of this structure, three-layer neural networks can operate on arbitrary amodal symbols (the “mental” representation is removed from the “sensory” input, having been computed in the hidden layer) as well as perceptual representations (“mental” representations remain dependent on the input) (Gibbs 2000).
Concept representation and prototypes
There are two ways to represent concepts in a connectionist network. Older connectionist models were localist (Elman et al. eds. 1998: 90), meaning that each concept was represented by a single node. In contrast, most current network models rely on distributed representations (Rumelhart and McClelland 1987). In such networks propositions and concepts are dynamically represented as patterns of activation. Distributed representations have important advantages over localist networks for modelling conceptualisation. As in the brain, one set of units may represent a variety of concepts through different activation patterns. Distributed representations are also consistent with the prototype theory of the mental lexicon (Rosch 2011). A concept does not consist of a single activated node, but rather an averaged pattern of activation that occurs when a typical set of features is given as input (Thagard 2005: 116). Activation is spread over many units that may represent features, so concepts that are similar will cause similar patterns of activity (Elman et al. eds. 1998). Therefore, the network may begin to cluster similar concepts together, resulting in the emergence of a prototypical representation, one that is composed of the features most common in the cluster. In a way, prototype structure can be seen as an emergent property in conceptualisation.
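The idea of a prototype as an averaged pattern of activation can be illustrated with a toy distributed representation. The feature units and concept vectors below are invented for the example; the point is only that a typical exemplar (robin) ends up closer to the averaged "bird" pattern than an atypical one (penguin), and both are closer than an out-of-cluster concept (dog).

```python
import numpy as np

# Each concept is a pattern of activation over shared feature units.
features = ["has_wings", "flies", "sings", "has_fur", "barks"]
concepts = {
    "robin":   np.array([1.0, 1.0, 1.0, 0.0, 0.0]),
    "sparrow": np.array([1.0, 1.0, 0.8, 0.0, 0.0]),
    "penguin": np.array([1.0, 0.0, 0.0, 0.0, 0.0]),
    "dog":     np.array([0.0, 0.0, 0.0, 1.0, 1.0]),
}

def cos(a, b):
    # Cosine similarity between two activation patterns
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The "bird" prototype is the averaged activation pattern of the cluster.
bird_prototype = np.mean(
    [concepts[c] for c in ("robin", "sparrow", "penguin")], axis=0)

print("robin vs prototype:  ", round(cos(concepts["robin"], bird_prototype), 3))
print("penguin vs prototype:", round(cos(concepts["penguin"], bird_prototype), 3))
print("dog vs prototype:    ", round(cos(concepts["dog"], bird_prototype), 3))
```

No unit is "the bird node"; the prototype exists only as an averaged pattern over the feature units, which is the sense in which prototype structure is emergent rather than pre-programmed.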
Emergence of features: language studies vs. mind models
Although connectionist networks may provide the most straightforward examples, emergence of meaning is not limited to conceptualisation models. Feature emergence is also a linguistic phenomenon: in metaphor comprehension, a non-salient feature (one that is not commonly elicited as a feature of the source or target domains) (Becker 1997) becomes salient (Utsumi 2005; Terai and Goldstone 2011). It could be argued that this type of emergence and emergence in the connectionist sense are associated merely because of the name. However, if we assume that metaphor is a categorisation process (Thomas and Mareschal 2001), both definitions of feature emergence are applicable.
For example, if objectification is an emergent feature of conceptual metaphorisation in the connectionist sense, it needs to be shown as a property of the conceptual system. In the context of metaphor studies, objectification can be considered an emergent property if it is demonstrated to be more salient in metaphor than without a metaphoric context. It appears that both of these approaches may be used to provide convergent evidence for the status of objectification. In the connectionist paradigm objectification may be both a process and a feature, while in the metaphor comprehension paradigm it can only be interpreted as a feature. For the sake of clarity these approaches are presented in the form of a table (see Table 1) below. A quick comparison of the two approaches shows that neural network models are more focused on the process by which mental representations are created, whereas metaphor studies focus on comprehension and retrieval of features. It would be interesting to see how these contrasting accounts could be used to study the status of objectification.