
AI Ethics: Why It’s Important, Now, To Work On Ethical AI

Photo by Greg Rakozy on Unsplash

AI ethics is one of the main concerns of investors and analysts, especially since the launch of OpenAI’s ChatGPT, which became the fastest-growing application. 

Ethics is necessary if we want artificial intelligence to be used properly and not become dangerous. This also applies to the fintech industry, where using improperly trained AI could be particularly risky. 

Why AI ethics makes headlines

Ethics in artificial intelligence makes headlines for both positive and negative reasons. 

While Microsoft recently reduced its AI & Society department to just seven people during one of the company’s waves of layoffs, many analysts and organizations are reflecting on the topic and on why ethics matters. 

These also include international organizations and policymakers, which can help reassure everyday users, who may still be largely unaware of the progress of artificial intelligence, that AI is not only a business topic. 

On November 23, 2021, UNESCO released a text, the “Recommendation on the Ethics of Artificial Intelligence”, which was then adopted by its 193 member states. 

The Recommendation opens by stating: “Taking fully into account that the rapid development of AI technologies challenges their ethical implementation and governance, as well as the respect for and protection of cultural diversity, and has the potential to disrupt local and regional ethical standards and values”.

The reference to multiculturalism is important in the case of AI. 

As we will see in a moment, not everyone is able to manage and use AI. If the technology remains the prerogative of tech professionals and enterprises, it may be hard for some cultures and segments of the population to gain access to it. 


Do we have sentient AI?

We don’t have sentient AI, at least not yet. 

So far, AI-based tools are trained by people and data. From one perspective, this means AI can’t be considered too dangerous yet; but it also means that if people provide biased data, the answers AI gives will be biased. 

The same applies if data and training are provided only by certain professionals and in certain countries. 

As reported by MIT, the gender gap in STEM (science, technology, engineering, and maths) is still extremely significant: women hold only 28% of the jobs in these fields that match their studies. 

A report published by IDC (International Data Corporation), the Worldwide Artificial Intelligence Spending Guide, projects that spending on AI will reach $154 billion in 2023. But where are these investments concentrated?

As reported by InvestGlass, investments are concentrated in the United States and China. Japan, Canada, and South Korea are also increasing their AI investments and strategies. The European Union is not the most advanced region when it comes to artificial intelligence, although countries like Germany and France are developing an interesting environment for it. 

All this data shows that not everyone is involved in this revolution, which can, of course, be detrimental to a valuable and ethical development of AI. 

If AI remains too concentrated in certain fields and countries, the data it produces will necessarily be biased.

Even if multiculturalism is not properly addressed yet, investors are already looking for technology that can be socially responsible and ethical.

What do investors think about AI? 

In recent years, increased awareness of social responsibility has also led investors to prefer businesses that are not harmful to society. 

In the case of artificial intelligence, it’s hard not only to create global frameworks to regulate the technology, but also for investors to fully understand what actually counts as ethical AI. 

AI is relatively new, and giving it a correct context is made even harder by the fact that it constantly changes. 

That’s why investors are using different methods to assess the possible future developments of an AI business, as well as its ethics as time passes and changes are made. 

As reported by TechCrunch, investors may find it more useful to assess the characteristics and qualities of a project’s founder, to better understand how they might react to new frameworks and how they intend to manage an AI project despite constant change. 

So, even if we’re talking about AI, humans still have the last word: the more ethical the people who use AI, the more ethical AI will be in the future.

Final Thoughts

AI ethics is not an easy topic, and it isn’t easy to assess how AI can be ethical.

AI is not sentient, and it doesn’t have a soul, however a soul may be defined. 

Despite this, it is pivotal to work on AI ethics right now, to avoid as many dangers as possible in the future.

 


 

If you want to know more about fintech news, events and insights, subscribe to FinTech Weekly newsletter!