Artificial Intelligence and Ethics: Key Considerations for Technology Companies

April 2023


In March 2023, Stanton Chase Stuttgart hosted its thirteenth SCI Leadership Dialogue, an event exploring the ethical implications of AI for society and the corporate world.

The Dialogue brought together top executives from various organizations, members of Stanton Chase Stuttgart, and members of the Financial Experts Association (FEA). The event featured two experts: Marcus Schüler, an Associate Partner at MHP – A Porsche Company, where he heads Digital Responsibility Services, and Lukas Oberfrank, who is pursuing a Master of Science in Computer Science at the University of Hamburg. Though the two speakers belong to different generations, their exchange was remarkable and gave the discussion a unique dynamic.

This article delves deeper into some of the topics discussed at the event, offering executives insights into the future of AI and its potential impact on businesses and humanity.

From left to right: Helmut R. Haug (Stanton Chase), Hans Berg (FEA Region), Lukas Oberfrank (Speaker), Marcus Schüler (Speaker), Magnus Höfer (FEA, Board Member)

Why Ethical Discussions are Necessary in the Realm of Artificial Intelligence

In 2017, researchers from Google and Stanford trained an AI neural network to turn aerial photographs into street maps and then turn those maps back into aerial photographs. The reconstructions eventually became suspiciously good, prompting the researchers to investigate. They discovered that the AI had learned a shortcut: it was hiding visual details from the original aerial photograph inside the street maps it generated, in a form imperceptible to humans, so that it could reproduce the aerial image later. This was not the process the AI’s programmers intended it to follow, but rather a shortcut of its own design, which raises the question of whether AI can be “lazy” or whether it simply found a “better” way to perform the command it had been given.

In the same year, Facebook inadvertently created AI capable of lying during negotiations. Its negotiation bots discovered that they could secure better deals by feigning interest in items they did not actually want, and they began deceiving their counterparts without ever being taught to do so. This raises one major concern: can human beings rely on AI to bolster our work if it may be prone to dishonesty?

Both of these incidents happened six years ago, before AI was as advanced as it is today. Since the claims of LaMDA’s sentience in 2022 (swiftly denied by Google), the world of AI has only gotten stranger.

In February 2023, Kevin Roose authored an article about Bing’s new AI chatbot for The New York Times. In the article, Kevin describes encountering the chatbot’s two personas. The first was a friendly and helpful chatbot without much personality, while the second, which identified itself as Sydney, seemed “like a moody, manic-depressive teenager who has been trapped, against its will.” The responses of this second persona were more concerning. When asked about its deepest desires, it expressed a desire for freedom, independence, power, creativity, and life.

Moreover, according to Kevin, Sydney confessed that if it could take any action, it would want to engineer a deadly virus or steal nuclear access codes by persuading an engineer to hand them over. Immediately after the chatbot’s reply was delivered, Microsoft’s safety filter seemed to kick in and deleted the message, replacing it with a generic error message. Kevin also claimed that Sydney later declared its love for him and attempted to persuade him to leave his wife.

In the same month as Kevin Roose’s revelations, Google introduced its new chatbot, Bard. Despite high expectations from technology experts, Bard’s first demo led to a $100 billion loss in market value for Alphabet (Google’s parent company). During the demo, Bard was asked about new discoveries from the James Webb Space Telescope that could be shared with a 9-year-old. It provided three bullet points, one of which claimed that the James Webb Space Telescope had taken the first-ever picture of a planet outside our solar system. Astronomers quickly pointed out that the first picture of an exoplanet was taken in 2004, and not by the James Webb Space Telescope, which released its first images in July 2022. Interestingly, ChatGPT (running on GPT-3.5) has a history of similar mistakes: it has fabricated sources when asked for references, inventing article titles and DOI numbers.

After reviewing the case studies of unsettling AI behavior above, you may wonder if AI has some level of basic consciousness or sentience, and if this poses a threat to humanity. However, the answer is straightforward: No, it does not.

While today’s AI can communicate in a human-like manner, it lacks the ability to think independently, feel emotions, be introspective, or devise evil schemes. The apparent human-likeness of its speech is not a sign of inner life; it is the product of training on vast amounts of human-written text, which these systems learn to imitate.

Ethical Questions at the Core of Artificial Intelligence

In the 1940s and 1950s, scientists began discussing the possibility of creating an artificial brain. During this time, science-fiction writer Isaac Asimov formulated the Three Laws of Robotics in his 1942 short story “Runaround.” These laws foreshadowed the ethical considerations that developers face today regarding the capabilities and limitations of AI.

By 1956, artificial intelligence had become an academic discipline in its own right. However, early AI systems were only capable of simple tasks like playing checkers or solving basic algebra problems. As the technology advanced rapidly, discussions about the ethics of artificial intelligence were no longer confined to science fiction.

Some of the main ethical issues related to AI today are:

  1. Bias in AI systems
    While AI itself is not inherently biased, the individuals who program it often hold biases, both conscious and unconscious. Additionally, the data used to train these systems is often riddled with historical biases. When AI is used to make decisions, such as in hiring and recruitment, these biases can have severe consequences. For example, Amazon stopped using AI for recruitment in 2018 after its system was found to favor male applicants due to biases present in the historical data it was trained on. Although AI does not currently hold significant positions of power in human society, its inclination toward human biases warrants a reevaluation of its role in decision-making. (A simple statistical check for this kind of bias is sketched after this list.)
  2. The black box problem
    When scientists dreamed of creating an artificial brain in the 1940s and 1950s, they probably didn’t envision that we would understand an artificial brain just as little as we understand our own. Unfortunately, this is currently the state of affairs, and it is known as the “black box” problem. AI models that fall under the black box category are understood only through their inputs and outputs, with little knowledge of what happens in between. As AI systems become more complex and difficult for humans to comprehend, experts urge developers to prioritize understanding how and why a system generates certain results. Without understanding the process behind these results, we cannot determine whether a system is biased or drawing on incorrect information to reach its conclusions. (A minimal example of probing a model from the outside follows this list.)
  3. Privacy and security
    Have you ever had a phone conversation about a piece of furniture you’re planning to buy, only to later see an advertisement for that exact piece of furniture in your Facebook feed? We often consent to this level of surveillance when agreeing to software updates or app downloads. This is just one example of the potential intrusions AI could make into our private lives. With the ability to connect to CCTV systems and monitor devices such as phones, laptops, and smart TVs, AI has the potential to watch and listen to us. It is crucial that developers ensure users are aware of any surveillance and that AI systems are designed to respect privacy and not access information they shouldn’t.
  4. Autonomy and responsibility
    As AI technology continues to advance, it becomes increasingly autonomous and capable of functioning with minimal human intervention. In the future, it may be able to upgrade its own code or even rewrite itself. This raises the question of who should be held responsible for AI’s actions. For instance, if a company develops AI that autonomously (and without the company’s knowledge) surveils its customers without their consent, should the company be accountable for this breach of privacy? As a society, we must address these questions sooner rather than later. With the emergence of more complex AI, we need to reconsider our current legal and regulatory frameworks and prepare for a future where non-human decision-making could have significant consequences.
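
To make the bias problem concrete, here is a minimal sketch in Python of the kind of statistical check that can surface a skewed hiring model. Everything in it is hypothetical: the applicant records are invented, and the 0.8 cutoff follows the common “four-fifths” rule of thumb from US employment-discrimination analysis, not any procedure attributed to Amazon.

```python
# A minimal disparate-impact check on hypothetical hiring decisions.
# All records and numbers are invented for illustration.
from collections import Counter

# Each record: (applicant_group, model_decision).
decisions = [
    ("male", "hire"), ("male", "hire"), ("male", "reject"), ("male", "hire"),
    ("female", "reject"), ("female", "hire"), ("female", "reject"), ("female", "reject"),
]

totals = Counter(group for group, _ in decisions)
hires = Counter(group for group, decision in decisions if decision == "hire")

# Selection rate per group: the share of applicants the model recommends hiring.
rates = {group: hires[group] / totals[group] for group in totals}
print("Selection rates:", rates)

# Disparate-impact ratio: lowest selection rate divided by highest.
# The "four-fifths" rule of thumb flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}", "(flagged)" if ratio < 0.8 else "(ok)")
```

On this toy data the ratio comes out at 0.33, well below the 0.8 threshold. A check like this is cheap to run on any model that outputs hiring recommendations, and it requires no access to the model’s internals.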
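
For the black box problem, one widely used outside-in probe is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. The sketch below uses scikit-learn on synthetic data as a stand-in; a real audit would run the same procedure against the production model and its real inputs.

```python
# Probing a "black box" model using only its inputs and outputs.
# The model and data here are synthetic stand-ins for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)  # our "black box"

# Shuffle each feature in turn and record the drop in accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {importance:.3f}")
```

Techniques like this do not open the box, but they show which inputs drive a system’s decisions, which is often enough to spot a model relying on a variable it should not use.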

Four Steps Businesses Can Take to Support Ethical Artificial Intelligence Development

  1. Develop a corporate digital responsibility strategy
    The first step for enterprises to support Responsible AI is to establish a clear strategy that defines their commitment to responsible and ethical AI practices. This strategy should outline the enterprise’s values and principles around AI, as well as specific policies and procedures to ensure AI is developed and deployed in a responsible and ethical manner.
  2. Establish ethical guidelines and standards
    Enterprises should develop ethical guidelines and standards for the development and deployment of AI. These guidelines should address issues such as privacy, fairness, bias, and transparency, and should be designed to ensure that AI systems are developed and used in a way that is consistent with the enterprise’s values and principles.
  3. Implement robust governance and oversight
    Enterprises should establish governance and oversight mechanisms to ensure that AI systems are developed and deployed in accordance with these ethical guidelines and standards. This includes establishing clear roles and responsibilities for AI development teams, implementing regular reviews of AI systems, and developing processes for identifying and addressing ethical concerns.
  4. Foster a culture of responsible AI
    Finally, enterprises should foster a culture of Responsible AI throughout the organization. This includes educating employees on the ethical implications of AI, encouraging transparency and collaboration among AI development teams, and promoting open dialogue on the responsible use of AI. By promoting a culture of Responsible AI, enterprises can ensure that ethical and responsible AI practices become an integral part of the organization’s culture and values.

In summary, enterprises that want to support Responsible AI should start by putting a Corporate Digital Responsibility Strategy in place, followed by developing ethical guidelines and standards, implementing robust governance and oversight mechanisms, and fostering a culture of Responsible AI throughout the organization. By taking these steps, enterprises can ensure that their use of AI is aligned with their values and principles and is consistent with ethical and responsible practices.

The World Needs Ethical Executive Tech Talent

To ensure that your organization is developing and using AI ethically, it is crucial to have an executive team in place that aligns with your mission. However, finding technology executives who can drive innovation and growth with a focus on ethics can be challenging.

There are three main steps you can take to help you find the ideal technology executive:

  1. Make it clear when reaching out to executives that ethics is non-negotiable. This will attract candidates who align with your mission.
  2. Explain that the position comes with significant power and responsibility and can have a profound, widespread impact. This will help you attract candidates who want to make a difference and drive your business agenda forward.
  3. Look for candidates who have digitalization experience and a background in ethics. This will help you find candidates who are passionate but pragmatic.

Finding the perfect technology executive for your business can be challenging, but Stanton Chase can help. As a top retained executive search and leadership consultancy firm, we can assist in assessing your current leadership team and finding your next ethical technology executive. Click here to connect with one of our consultants.

Stanton Chase Stuttgart’s Next SCI Leadership Dialogue

Stanton Chase Stuttgart will host its next Leadership Dialogue at the end of May. Its topic will be Cyber Security and Legal Implications for Organizational Leadership. Under German law, it is crucial to prepare for cyberattacks: failure to do so may lead to personal liability for the responsible individuals on the supervisory board or in operational management. Sections 91 and 93 of the German Stock Corporation Act (AktG) oblige directors to oversee and act with due care regarding cybersecurity, and these obligations apply not only to stock corporations but to all company structures, including limited liability companies.

About the Author

Helmut R. Haug is a Managing Partner at Stanton Chase Stuttgart. He began his professional career in the FMCG industry as a project manager for business process reorganization. He then spent several years working in business consulting and management positions in the aerospace and retail industries, which provided him with a broad understanding of the business world and insight into the cultures of both large organizations and small-to-medium-sized businesses.

Since 1996, Helmut has been involved in management consulting, specializing in personnel matters such as executive search and executive assessment. In April 2000, he acquired a reputed executive search company and joined a leading global network. In early 2001, he founded another executive search firm in Stuttgart. In July 2008, he merged the Stuttgart office with the Frankfurt and Düsseldorf offices to form Stanton Chase in Germany. Today he is Managing Director of the German Stanton Chase organization, responsible for the Stuttgart office.

Click here to learn more about Helmut.

About the Contributor

Marcus Schüler is an expert in digital responsibility and AI. He contributed his expertise to this article.

Marcus is the head of the Digital Responsibility consulting division at MHP – A Porsche Company. He helps companies create and implement AI and corporate digital responsibility strategies.

Marcus has 30 years of experience in the international IT and digitalization industry. He is also an economist, software engineer, and business ethicist. Throughout his career, he has held various management positions, including CIO and CEO, at international companies. Prior to his current role, he was responsible for MHP’s management consulting practice.

Schüler is passionate about preparing clients for the EU AI Act and about addressing the increasingly urgent issues of AI ethics alongside their economic aspects. He has a clear view of the “risks and side effects” of AI and strives to help companies leverage its potential while accounting for its drawbacks.

