
AI Algorithms: The Hidden Negative Ethical Threats

2021-06-15 | Source: 51Due Tutor Group | Category: Essay Sample


After artificial intelligence revealed its tremendous potential to the world, algorithms also began to gain more attention from the public. More time, effort, and money are being poured into the research and development of algorithms. However, the proverb "every story has two sides" also applies to algorithms. While developers explore and expand the capabilities and complexity of algorithms, hidden weaknesses are also exposed. When an engineer encounters a problem, he or she will try to relate the problem back to acquired knowledge and eventually solve it by following applicable formulas. Algorithms usually compare a problem with their databases first, much as a human refers to the knowledge he or she has acquired. After identifying the problem, algorithms follow formulas, procedures, or programs to produce a solution. The problem is that algorithms also have learning capabilities: they can learn from almost everything they encounter. Algorithms are contributing to deteriorating ethical standards in many disciplines, and without intervention they will pose a huge threat to social justice in the future.

Before getting into the negative ethical implications of algorithms, it is important to understand what algorithms are. The algorithm is an important concept in computer science. Basically, an algorithm is a set of directives created to solve a specific type of problem. With the wide application of computers, algorithms have penetrated the field of human knowledge and profoundly changed human thinking; they are an important tool for humans to understand and transform the world. Robin K. Hill (37) also explains the characteristics of the algorithm in the article "What an Algorithm Is," arguing that an algorithm is finite and must be expressible within limited time and space. The algorithm is also abstract and has no spatiotemporal trajectory, which means that it is universal and can be applied to any single, specific instance of its task (Hill 42). In the era of weak artificial intelligence, algorithms are the brains of computer programs that replace human labor and greatly enhance the efficiency of decision-making processes.
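To make the definition concrete, the short Python sketch below (an illustration of the general concept, not an example drawn from Hill's article) shows an algorithm in exactly this sense: a finite set of directives for one specific type of problem, here searching a sorted list, that applies to any concrete instance of that task.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent.

    A finite set of directives: the loop halves the search range each
    iteration, so it always terminates, and the same abstract procedure
    works for any concrete instance of the "search a sorted list" task.
    """
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([2, 5, 8, 13, 21], 13))  # prints 3
```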

Many people confuse algorithm ethics with robot ethics. Popular culture, such as novels and movies, has depicted different types of robots that either take control of the world or become loyal servants to human beings. These depictions create the sense that robot and AI ethics are not that relevant to real life. However, in addition to robots, artificial intelligence is implemented in diverse forms, such as the various intelligent software tools and chatbots currently used in medical, legal, educational, financial, and even musical and artistic fields, driven by powerful computing capabilities and big data as well as intelligent technologies such as natural language understanding and speech recognition (Tzafestas 55). Such applications also carry many ethical responsibilities that differ from robot ethics. At present, most discussions on the ethics of artificial intelligence focus on the ethics of robots, because this part is most likely to resonate with the public. However, more attention should be given to those invisible intelligent software systems and algorithms, because unlike the robots of sci-fi movies, they have already had profound effects on human society.

Discrimination is one of the most obvious ethical problems caused by algorithms nowadays. Discrimination is a long-standing topic that originated in colonial history; whether racial discrimination or class discrimination, it has been part of the public realm for centuries at least. However, algorithmic discrimination is still a relatively new concept in today's world. The algorithm itself is essentially a mathematical expression. According to Green and Viljoen (5), before an algorithm makes a decision, it uses its own logic as a standard and, based on information and data, divides people into different categories and affixes various types of labels on them. When algorithmic decisions are made, these labels serve as measures that further shape the decisions and contribute to discrimination implicitly. Algorithmic discrimination presupposes embedded human values: algorithms are not created out of thin air but are designed by human beings. Even if the designers of an algorithm are totally fair and unbiased, the algorithm still runs tremendous risks of contamination after processing data from the internet. In other words, algorithms naturally and inevitably carry ethical features.
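To see how labeling can slide into implicit discrimination, consider the hypothetical Python sketch below. It is not taken from Green and Viljoen's work; it simply shows a toy loan-scoring rule in which a "neighborhood risk" label learned from historical data stands in as a proxy for a protected group, so that two applicants with identical incomes receive different scores even though the code never mentions race or any other protected attribute.

```python
# Hypothetical historical data: average default rates per neighborhood.
# If neighborhoods correlate with a protected group, this label becomes
# a proxy for that group even though the group is never referenced.
NEIGHBORHOOD_RISK = {"north": 0.05, "south": 0.25}  # made-up figures

def loan_score(income, neighborhood):
    """Toy scoring rule: higher income helps, a 'risky' label hurts."""
    base = min(income / 100_000, 1.0)          # normalized income signal
    penalty = NEIGHBORHOOD_RISK[neighborhood]  # label attached to the person
    return round(base - penalty, 3)

# Identical incomes, different scores, purely because of the label.
print(loan_score(60_000, "north"))  # 0.55
print(loan_score(60_000, "south"))  # 0.35
```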

Algorithmic discrimination is going to pose a great threat to social justice and ethical values as its application scenarios increase. In March 2016, Microsoft's chat robot Tay was released to interact with Twitter users. Within one day, the "maiden chatbot, of innocent heart, benevolent desires, and amiable disposition" was corrupted into "a malevolent, foulmouthed crone" (Davis 21). Tay was badly influenced as soon as it started chatting with users and became an extremely racist algorithm in a very short period of time. This case reveals the discrimination problems that artificial intelligence and machines cannot avoid. Is it reasonable for the algorithmic robot behind a financial institution to refuse a loan to someone? Would legal software determine someone's guilt because of that person's race or ethnicity? Since artificial intelligence tends to systematically replicate the human ethical flaws, both intentional and unintentional, embedded in the data used for learning, there are countless opportunities for negative ethical implications. This creates a hidden but alarming future for AI algorithms.

Another major application of algorithms in the modern world is personalized recommendation on the internet. Many developers consider the user's sensitivity to price when designing a product recommendation system. If a user often opts for the cheaper products in a category, the algorithm determines that the user is highly price-sensitive and relatively "poor." When the system then makes recommendations to him or her, it gives priority to particularly cheap items. There are also instances where an algorithm detects the user type or user history and charges differently for the same products. For example, some games detect the users' device types and charge iOS users more than Android users. On some shopping websites, if a customer has turned into a loyal customer and makes regular purchases, the level of discount offered to that customer is reduced and the promotional opportunities are given to new customers who have never purchased at the store. Such discrimination has outraged customers in the past. In the long term, customers receive stereotyped treatment from algorithms without knowing it, and their own interests are hurt.
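A simplified sketch of this mechanism might look like the hypothetical Python code below. It is not the code of any actual shopping site; it only illustrates how a purchase history that skews cheap can flag a user as price-sensitive, after which the cheapest items are pushed to the top of that user's recommendations.

```python
def is_price_sensitive(purchase_prices, category_avg):
    """Flag a user whose average purchase is well below the category average."""
    avg = sum(purchase_prices) / len(purchase_prices)
    return avg < 0.7 * category_avg  # arbitrary illustrative threshold

def recommend(products, purchase_prices, category_avg, k=2):
    """Return k product names; price-sensitive users see the cheapest first."""
    if is_price_sensitive(purchase_prices, category_avg):
        ranked = sorted(products, key=lambda p: p["price"])
    else:
        ranked = sorted(products, key=lambda p: p["rating"], reverse=True)
    return [p["name"] for p in ranked[:k]]

products = [
    {"name": "budget headphones", "price": 15, "rating": 3.9},
    {"name": "midrange headphones", "price": 60, "rating": 4.4},
    {"name": "premium headphones", "price": 220, "rating": 4.8},
]
# A user with a history of cheap purchases is steered away from premium items.
print(recommend(products, purchase_prices=[12, 18, 20], category_avg=70))
```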

Many people consider the above-mentioned ethical problems to be nothing but minor side effects of a new technology. For example, some hold the belief that since algorithms are based on mathematics, they cannot be biased or discriminatory. "Governments are increasingly looking to utilize Automated Decision Systems (sometimes called 'Robo-adjudication') to gain efficiencies in the administrative state" (Beatson 316). Indeed, machines are superior to human beings in their ability to process large amounts of data and obtain "logical" results. However, mathematics has serious limitations when it comes to applications in the social sector. For example, biased results are still generated with high frequency by AI-supported adjudication. Therefore, the role of AI algorithms is largely confined to providing relevant information for reference in court, and the ultimate decision-making power remains in the hands of human beings (Beatson 330). Although mathematics is an extremely precise and powerful tool, there are certain values and objects in the physical world that simply cannot be quantified. Some practical questions lie well beyond the scope of mathematics and reveal that math is not the ultimate answer to ethical algorithms.

There are also people who consider algorithmic discrimination not to be a big deal. Indeed, a racist chatbot and personalized product marketing do not seem like serious ethical problems, and they might be fixed with minor patches. However, such arguments fail to consider the long-term implications for social justice. For example, a crime risk assessment algorithm, "COMPAS," used by some courts in the United States is considered to cause systematic discrimination against black people (Beatson 315). If a black man commits a crime, he is likely to be mislabeled by the system as having a high risk of reoffending, and thus be sentenced by the judge to imprisonment, or to a longer term, instead of probation. In addition, similar incidents of discrimination are gradually increasing. A recommendation algorithm may seem harmless, but when biased algorithms are applied to criminal justice, financial credit, or employment assessment, where central personal interests are at stake, they cause far more severe damage. Therefore, algorithmic discrimination is indeed a much bigger deal than many people would expect.

In the foreseeable future, human dependence on algorithms will only increase. If the human ability to process information is further weakened, machines are likely to push even more information to us, and humans will become even more willing to rely on algorithms and machines. For example, to travel to a city, people may need a machine to recommend a travel route. When entering an education system, people may hope to obtain high-quality course recommendations. When planning a future career, algorithms will list the planning options for people to choose from. At that point, the computer will make the recommendations deemed most suitable for a person based on previous behaviors, family background, and consumption habits. Consequently, the poor and the rich may end up in different parts of the city and may have entirely different life plans and goals before their lives even begin. The gap between the rich and the poor may widen further. The more people rely on algorithms, the more they are affected by these ethical problems.

There are also people who argue that the ethical implications of algorithms can be controlled by controlling the coding process. Code is the governing directive of every algorithm, and by controlling the coding process, the program designer does gain a fair level of control over the algorithm's outcomes. For example, according to Florez et al., some "architecture of the code can be exclusionary or discriminatory". In the predictive policing sector, for instance, code structures can be the fundamental reason why racial minority groups are targeted more frequently and convicted of heavier crimes (Florez et al. 155). However, despite the importance of code architecture, controlling the coding process does not mean controlling the ethical implications of the entire algorithm. As illustrated in the case of Tay above, while the designers of the robot did not build an exclusionary architecture, the input from Twitter users became the major source of negative impact. Some "primitive" algorithms are already behaving in puzzling ways that are incomprehensible to human beings, and the input data, as the Tay case shows, add endless uncertainties. Therefore, trying to build fully "controllable algorithms" seems to be an impossible mission at the current stage.

Having recognized the potential dangers of algorithms, we must find measures to mitigate their negative implications. Transparency is the first potential solution to algorithmic discrimination and other ethical problems. At its root, algorithmic discrimination is caused by improper design and application of algorithms, and to control the social impact of algorithms at that root, transparency and accountability are key. There have been voices opposing transparency because some people regard it as the enemy of accuracy or efficiency (Martin 842). However, transparency and accuracy do not necessarily contradict each other, because transparency of algorithms does not require full disclosure. Instead, there is a whole range of transparency measures targeting different goals such as bias detection, procedural justice, and corporate responsibility (Martin 845). Even though algorithms may seem difficult to understand, transparency efforts are the first step towards accountable coding.
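As one hypothetical illustration of what partial transparency could look like in practice (a sketch of a generic audit-logging pattern, not a scheme proposed by Martin), an algorithm can record every automated decision along with the inputs and model version that produced it, so that auditors can later check outcomes for systematic bias without the vendor publishing its full source code.

```python
import json
import datetime

def log_decision(log_file, model_version, inputs, output):
    """Append an auditable record of one automated decision to a JSON-lines file."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,    # the features the decision was based on
        "output": output,    # the decision actually returned to the user
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a loan decision so an auditor can later compare
# approval rates across demographic groups.
log_decision("decisions.jsonl", "credit-model-v2",
             {"income": 60000, "neighborhood": "south"}, {"approved": False})
```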

In addition to transparency, accountability at the enterprise level is also needed to mitigate the negative ethical implications of algorithms. For example, the OPAL open algorithms technology is an innovative solution proposed by the Massachusetts Institute of Technology, with TalkingData also involved in its research, development, and application. The principle is similar to federated learning: the result of a calculation is obtained by moving algorithms to the data rather than moving the data itself. OPAL is more flexible, however, supporting not only machine learning algorithms but also other statistical algorithms, and it has a wider range of applications (Lepri et al. 621). By adding technologies such as auditing and blockchain, designers can guard against risks of contamination more comprehensively. Beyond technology, management is also an important factor contributing to accountability. Taking the current pandemic as an example, the personal information we provide for disease prevention should be collected privately, encrypted before transmission, and made viewable only to those with the corresponding level of authorization, and only to the limited extent necessary, so that labeling and discrimination are minimized.
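The underlying "move the algorithm, not the data" idea can be sketched as follows. This is a schematic, hypothetical illustration of the general pattern only, not the actual OPAL interfaces: the data holder runs a pre-approved query locally and returns nothing but an aggregate result, so the raw records never leave its servers.

```python
# Schematic sketch of the "move the algorithm to the data" pattern.
RAW_RECORDS = [  # held privately by the data holder, never exported
    {"age": 34, "visits": 2},
    {"age": 51, "visits": 7},
    {"age": 29, "visits": 1},
]

APPROVED_QUERIES = {
    # Only pre-approved aggregate computations may run against the data.
    "average_visits": lambda rows: sum(r["visits"] for r in rows) / len(rows),
}

def run_open_algorithm(query_name):
    """Execute an approved query locally and return only the aggregate result."""
    if query_name not in APPROVED_QUERIES:
        raise PermissionError("query not approved by the data holder")
    return APPROVED_QUERIES[query_name](RAW_RECORDS)

print(run_open_algorithm("average_visits"))  # aggregate only, no raw rows exposed
```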

Finally, industry ethical standards are needed to regulate companies, and collaboration between different stakeholders is necessary for such standards to be established. The government is responsible for providing legal frameworks that guarantee the basic rights of users and set basic codes of conduct for companies. Companies are responsible for following government regulations and helping their customers make informed choices about their algorithmic products and services. A sense of corporate social responsibility needs to be established so that every algorithm goes through an assessment system to determine its ethical implications before it is released into the market. There may also be third-party supervisors with no stake in the companies' profits, who establish fair standards that apply to all competitors in the market. In addition, the media could assume supervisory responsibilities and help the public identify potentially discriminatory or unethical algorithms.

We live in an era of unprecedented algorithms. While bringing convenience and benefits to human lives, algorithms also bring discrimination, bias, and other ethical problems in their application. Such problems are already evident in some cases, such as the Amazon recommendation system and Microsoft's Tay robot. While algorithms are essentially based on mathematics and are controllable to a large extent, the increasing use of big data and deep learning is creating huge uncertainties about the morality of algorithms. Without effective intervention, disasters in morality and social justice could occur in the future. Therefore, the current ethical breaches of algorithms should not be underestimated, because they point to greater risks of social injustice in a future where human beings are more and more dependent on algorithms. To avoid such an unjust future, transparency, accountability, and ethical standards are urgently needed for all types of algorithms.
