Cognitive imaging of the cerebral cortex -- selected essay sample
2015-12-25 Source: 51due tutor group Category: Essay sample
An important assumption of cognitive imaging is that when a differential activity shows up in one study, it is replicable and hence generally applicable. In other words, this differential activity is expected to be common to all healthy humans. The following essay sample discusses this in detail.
Abstract
PET and fMRI are relatively new techniques which allow imaging of the brain without disturbing it, and they are important tools in clinical treatment and investigation. They are also used for cognitive investigation, by which I mean investigating the workings of the healthy brain (Frackowiak et al, 1997). Naturally, the main focus of investigation is the cerebral cortex. In this paper, I survey the recent literature for evidence about the replicability of these studies. The survey covers 125 primary research articles, taken from 21 different journals during the first 10 months of 1997. The main findings are:
The data about replicability in cognitive imaging is extremely sparse.
The sparse data that does exist is mostly negative, i.e. it points to irreplicability.
These findings suggest that there is a problem with replicability in cognitive imaging, and an effort is required to resolve it.
1. Introduction
Cognitive imaging typically involves imaging the brains of subjects in two experimental conditions, and comparing the activity of the brain in the two conditions. The regions of the brain where there is a difference between the activity in the two conditions are then assumed to mark cognitive processes that differ between the conditions.
An important assumption of cognitive imaging, which underlies the interpretation of these studies, is that when a differential activity shows up in one study, it is replicable and hence generally applicable. In other words, it is expected that this differential activity is common to all healthy humans. This assumption is based on a more fundamental assumption: that humans have a common cognitive architecture, and that cognitive imaging data reflect the characteristics of this architecture.
Is the assumption of replicability correct? In the words of one of the papers in this survey (Schreus et al, 1997): "Replication is a necessary part of scientific investigation and it has met with variable success in PET. Investigations of seemingly similar paradigms do not often yield the same result." In this paper, I survey the literature for information about the replicability of cognitive imaging.
First, we need a clear idea of what replication means in the context of cognitive imaging. The data that come out of brain imaging studies are a set of small regions, a few mm across, that are differentially active between the experimental conditions. These are commonly called activations. In most cases, the actual data is the location of the peak of activity, in Cartesian coordinates on a `standard' brain. Hence, a replication means finding the same activations in a different study, using the same or a closely related experimental paradigm. The `same activations' in the previous sentence have to be the majority of the activations, for two reasons:
If only a minority of the activations is shared between studies, this may be a result of chance, rather than reflect real, replicable data.
Even if we know that some minority of the activations in a study is going to be replicable, it still means that most of the activations are irreplicable, i.e. they are noise. Unless we have some algorithm to determine which activations are real data (i.e. replicable), using data from a single study will introduce more noise (irreplicable data) than real data. Thus even if it is established that for most studies a minority of the activations does represent real data, the data from a single study on its own is more confusing than useful, and only multi-study data is reliable.
Therefore, a match between two studies of a minority of the activations is at best a weak replication, and needs further testing before it can be taken as a proof for replicability in cognitive imaging.
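The majority-match criterion described above can be made concrete as a simple coordinate comparison. The sketch below is purely illustrative, not any published matching procedure: the 10 mm tolerance and the example Talairach coordinates are assumed values.

```python
import math

def match_peaks(peaks_a, peaks_b, tol_mm=10.0):
    """Count peaks in study A that lie within tol_mm of some peak in study B.

    Peaks are (x, y, z) Talairach coordinates in millimetres; the tolerance
    is an assumed value for illustration.
    """
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    matched = [p for p in peaks_a if any(dist(p, q) <= tol_mm for q in peaks_b)]
    return len(matched)

# Two hypothetical sets of activation peaks (mm):
study_a = [(-42, 20, 4), (36, -60, 48), (0, 12, 40)]
study_b = [(-40, 22, 6), (10, -80, 0)]

n = match_peaks(study_a, study_b)
replicated = n > len(study_a) / 2   # the majority criterion from the text
print(n, replicated)                # → 1 False
```

Under the majority criterion, a single matching peak out of three does not count as a replication, which is exactly the "weak replication" situation discussed above.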
In general, replicability is best tested by different research groups repeating the same experiment in a coordinated way, and then comparing the results. However, there is a lack of such studies in cognitive imaging. The only large study to investigate the replicability of cognitive imaging is that of Poline et al (1996). In this study they compare the results from 12 European centres, in a very simple task. Their Figure 2 shows the results from the 12 studies, and they clearly diverge. In particular, the area of activation ranges from almost nothing to almost all of the anterior half of the left hemisphere. The authors say: "The high consistency between these patterns is seen clearly, especially among centers with high sensitivity", but the only similarity seen in their figure is the tendency for the cortical activation (the figure contains sub-cortical activation as well) to concentrate in the anterior half of the left hemisphere. The authors attribute this variability to the different sensitivities of the machines in the different centres. In their conclusions, the authors say they found three regions in the cortex where almost all the studies have activation. However, to do that they:
Report the results in terms of gyri and sulci, rather than the usual way of giving coordinates of peaks of activation. This is a much coarser unit, which makes it easier to match activations between studies.
Reduce the threshold for detecting activation to p<0.2. Normally, a threshold of at least p<0.05 is used, and in many cases a much stricter one.
Both of these manipulations greatly increase the chance of finding common activation. Since these manipulations are very different from common practice in the field, the results cannot be used to estimate the replicability of the results of other studies. In the appendix to their paper, Poline et al (1997) give the coordinates of the peaks that were identified by each centre, at the threshold of p<0.05. The coordinates are presented in tables, with matching peaks between centres aligned together. Inspection of the tables shows that there are no peaks where the majority of the centres have activation, and that the peak distributions of the different centres do not seem to relate to each other. The authors themselves say: "We observe large variability." They do not give any further analysis of this data.
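The effect of coarsening the reporting unit on chance agreement can be illustrated with a small simulation. This is a hypothetical sketch, not Poline et al's analysis: the region counts (an assumed parcellation of ~50 gyral units versus an assumed finer grid of 2000 candidate peak sites) and the 8 peaks per study are illustrative values only.

```python
import random

def chance_overlap(n_regions, peaks_per_study, trials=20000, seed=0):
    """Estimate the probability that two unrelated studies report at least
    one activation in the same region, when each study's peaks fall at
    random over n_regions possible locations."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = {rng.randrange(n_regions) for _ in range(peaks_per_study)}
        b = {rng.randrange(n_regions) for _ in range(peaks_per_study)}
        if a & b:
            hits += 1
    return hits / trials

# Coarse parcellation (gyri/sulci) vs a finer grid of peak coordinates:
coarse = chance_overlap(n_regions=50, peaks_per_study=8)
fine = chance_overlap(n_regions=2000, peaks_per_study=8)
print(round(coarse, 2), round(fine, 3))
```

With these assumed numbers, two completely unrelated studies share at least one gyral-level "activation" most of the time, while at coordinate-level resolution chance agreement is rare; this is why matching by gyri and sulci makes apparent replication much easier to find.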
Another comparative study is Poepple (1996), which compares the results of five studies of phonological processing and says in its conclusion section: "In a comparison of five PET experiments on phonological processing that were very similar in their task demands, no overlap in results was found in local increases in rCBF across experiments." However, he does not discuss the overall problem of replicability of cognitive imaging. In their response (Demonet et al, 1996), the authors of these articles highlight the fact that all the studies show some activation in Broca's area (their Table 1), but they have to admit that outside Broca's area the results diverge, and explain this by experimental variations.
Because of the lack of large studies testing the replicability of cognitive imaging, I decided to survey the recent literature for evidence about replicability in cognitive imaging. The results of the survey are presented in the next section, followed by a section about possible explanations for the lack of replication seen in this survey, and a section on the implications.
2. Literature survey
In the survey, I included any research article or review that I could find which reports on cognitive imaging (by PET, fMRI or SPECT) in healthy humans, from the beginning of 1997 to the time of writing (October 1997). This is not meant to be absolutely exhaustive coverage of the literature in this period, but a representative sample of it. The survey included 125 research papers (excluding reviews), from 21 different journals, and these are marked by `**' in the references section.
In this survey I do not try to evaluate the conclusions of these studies or analyse their results, and I do not touch on the methodology of the data processing. The only thing I am looking for is evidence concerning the replicability of cognitive imaging. In particular, I am looking for comparisons of data between different studies, and reports of success or failure in reproducing the results of other studies. Because, as will be seen below, these are very rare, I also look for comparisons of data between individuals in the same study, assuming that this measure can give us hints about inter-study replication.
I divide the discussion into two major parts: one about replicability between studies, and the other about replicability between individuals in the same study. In addition, I survey, in a separate part, those reviews from the same period which specifically discuss cognitive imaging results.
2a. Replicability across studies
In the 125 studies that I surveyed, there was no report of a robust replication of the results of another study. The closest is the study by Hyder et al (1997), which compares their fMRI results with a previous PET study and says: "..., the agreement between the Talairach and Tournoux coordinates (18) for certain fMRI and PET activations is excellent." However, out of the 11 activations that they identify, only 3 match the PET study, and of these one is 13.4 mm away from the PET peak and a second is 9.4 mm away (see their Table 2), which is clearly not an `excellent' match. The authors also compare activations from their own previous study with the same PET study (their Table 3), and here one activation (out of 5) matches. Thus in this study we see, at best, a weak replication. Hyder et al (1997) do not present any analysis of the statistical probability of this match.
Only two other studies show data from different studies. Nobre et al (1997) show data from several studies in their Figures 8 and 9. The data clearly diverge, and the authors say: "The variability in the reported loci of premotor activation in eye movement and visuospatial-attention studies is not negligible, and remains a point for further discussion and studies." Taylor et al (1997) show in their Figure 5 results from five studies that used Stroop stimuli, and these are also divergent. The authors' comment on this figure is: "As is evident from inspection of Fig. 5, reported ACC activations cover a large extent of the cortex, and some of the apparent inconsistency may arise from a mistaken tendency to assume that foci separated by 2-3 cm represent the same function."
Three studies present coordinates of matches of individual regions of activation. Zald and Padro (1997) find that the coordinates of a `weaker band' in their data match the activation in a previous PET study, but say nothing about the main areas of activation they found. Kanwisher et al (1997a) claim that the main peak they found is close to peaks in previous similar studies, and quote the coordinates of an almost identical peak in another study, but do not compare the rest of the activations in these studies. Orban et al (1997) pick matches to their results from the literature purely by matching coordinates, irrespective of the tasks that were used in those studies.
Two studies report failing to replicate the results of other studies, without presenting data. Kanwisher et al (1997b) compare their results to other studies and find that they do not match; they explain this by differences in the experimental settings. Dhankar et al (1997) report that their attempt to reproduce the results of a previous study was unsuccessful.
Two studies report (without presenting actual data) replication of their own previous studies. Schreus et al (1997) claim that their study "replicates and expands our previous findings." However, later they say: "It is interesting to note that there are some differences between the present study and our previous study," and go on to present a list of differences. These authors are clearly worried about the replicability of the results, because the next paragraph contains a discussion of the replicability of PET studies, starting with the quote in the introduction above. In this discussion, they present two cases of discrepancies between studies, but none of replication, and they go on to explain this by experimental variations. Jonides et al (1997) claim `excellent agreement' between the regions of activation in their study and in their own previous studies. However, they did the comparison using, in their words, a `relaxed criterion' of p<0.1, which is clearly not `excellent'.
The remaining studies either completely ignore the data of other studies, compare it only in qualitative terms, or compare interpretations of the data (rather than the data itself).
3. Is the underlying activation replicable?
The survey in section 2 shows a virtual absence of replication in cognitive imaging studies. Some possible explanations are mentioned in section 2, including variations in the experimental paradigm, differences between the machines that are used, and variability in gyral morphology. However, it is hard to see how these explanations can account for the virtual lack of replication, the failure to replicate in those studies that did try to do it (Kanwisher et al, 1997b; Dhankar et al, 1997), and the variability between subjects in the same study. These explanations could account for some failures of replication, but not for a virtual absence of replication.
Moreover, even if these explanations are true, that does not solve the real problem of the lack of replication. If the results of cognitive imaging are so sensitive to these artefacts, there is no way to know which part of the results is artefact and which part is real data.
The `opportunistic system' suggestion by Bly and Kosslyn (1997) (section 3c above) does seem good enough to explain the variability. In fact, this suggestion is quite extreme in its implications, because if it is true, we should not expect any replication ever, even between images of the same individual at different times. As mentioned in section (3b) above, I found only one case that addressed this possibility (Kanwisher et al, 1997a), and they found replicability in the one subject they tested it on.
A less extreme suggestion is that the activation which is seen in cognitive imaging is replicable in each individual over time, but is variable between individuals, and hence between studies. In this case, the averaging in each study across subjects picks up those activations that are shared by chance between a significant proportion of the subjects in that study.
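This chance-averaging scenario can be sketched with a toy simulation. The numbers here (12 subjects, 200 candidate regions, 20 idiosyncratic activations per subject, a group threshold of 4 subjects) are assumed illustrative values, not estimates from any real study.

```python
import random

def group_map(n_subjects, n_regions, active_per_subject, min_overlap, seed=1):
    """Return the regions that survive group averaging: those active in at
    least min_overlap subjects, even though each subject's active set is
    drawn independently at random (i.e. purely idiosyncratic)."""
    rng = random.Random(seed)
    counts = [0] * n_regions
    for _ in range(n_subjects):
        for r in rng.sample(range(n_regions), active_per_subject):
            counts[r] += 1
    return [r for r, c in enumerate(counts) if c >= min_overlap]

# Two independent 'studies' with purely idiosyncratic subjects:
g1 = group_map(12, 200, 20, min_overlap=4, seed=1)
g2 = group_map(12, 200, 20, min_overlap=4, seed=2)
shared = set(g1) & set(g2)
print(len(g1), len(g2), len(shared))
```

Under these assumptions, each simulated study typically yields a handful of group-level "activations" purely by chance overlap between subjects, yet the two studies rarely agree with each other; this is precisely the pattern of within-study group effects without between-study replication that the suggestion above predicts.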
This intersubject variability can arise for two non-mutually-exclusive reasons:
The current methods of PET and fMRI may not actually tap the real patterns of activity in the cortex. The volume of each region of activation in these studies is several tens to several hundreds of mm3, containing several million to several tens of millions of neurons. Cognitive imaging tells us about changes in the average activity of these neurons, but not about changes in the pattern of activity (i.e. which neurons are active). Thus these methods may miss a lot of processing that happens without a change in average activity, and hence may have many false negatives. On the other hand, false positives, i.e. increases in average activity in some region which are not associated with important processing, may simply be the result of the fact that the cortex is not a perfect system.
The underlying cognitive architecture may not be the same between individuals (presumably because it is mostly learned). This possibility would be very disappointing to many cognitive scientists, but it is in accord with data from brain damage. These data show that for most of the cortex (the `association areas'), there is weak, if any, correlation between the location of damage and cognitive deficits (Nolte, 1993, ch. 15; Wilkinson, 1992, ch. 12; Brodal, 1992, ch. 17; DeMeyer, 1988, ch. 13). The areas where there are correlations are the sensory-motor areas, where the correlation is determined by input to and output from the cortex. Outside these regions, there is a correlation between damage to some regions and some kinds of aphasia (Wernicke's area and Broca's area). For most cognitive functions, there does not seem to be any function/location relation. If humans had a common cognitive architecture, we should expect function/location correlation over most of the cortex.
A weaker version of this possibility is that humans have a common cognitive architecture, but it is not associated with specific locations in the cortex. Since cognitive imaging gives us data about locations, this weaker version still means that cognitive imaging does not tell us much about the common architecture itself.
The lack of studies which explicitly try to replicate previous results makes it impossible to conclude what the reason for the lack of replication in cognitive imaging is. However, the virtual lack of replication and the variability between individuals in the same study hint that it is a result of variability in the underlying data, rather than a lack of effort at replication. A better answer to this point can be given only when more replication studies are published.

