As the adoption of artificial intelligence (AI) increases, it is important to consider ethics in its development and deployment to avoid risks such as bias, discrimination and exclusion.
This was stated by speakers at a webinar organized by the Institute of Information Technology Professionals of South Africa (IITPSA) on Responsible AI in Africa.
The webinar, organized by the IITPSA Artificial Intelligence and Robotics Special Interest Group (SIGAIR) and the Social and Ethics Committee, explained how AI could be used responsibly to drive prosperity for all in Africa.
The chair of the IITPSA Social and Ethics Committee, Josine Overdevest, highlighted several ethical risks related to AI: “AI can tend to be biased and discriminatory – for example, when you only have developers from a single demographic group, they may disregard the views of others.” She noted that privacy risks could arise in the capture and processing of data, as well as in the use of AI for surveillance purposes.
“In fair and interconnected societies, transparency is important: for example, an avatar must be identified as such, or people must be informed why certain decisions were made by AI,” she said.
The impact of AI on people needs to be carefully studied, Overdevest said. “We need to consider the question of job loss: what do we do with people who lose their jobs because of AI, and how can we upskill them? Another important issue is the digital divide: how can we ensure that everyone we design these technologies for can access, use and understand them?”
She highlighted concerns about autonomy and control, dependence and manipulation, misinformation and ethical decision-making. “We need to ensure that AI systems make ethical decisions that take into account cultural and moral values,” she said. “Technological development outpaces regulatory development, but we cannot wait until there is more regulation around AI ethics before acting responsibly.”
Overdevest stressed the importance for companies that design and bring AI to market of doing so responsibly. “Business incentives related to ethical and responsible AI include cost reduction, ESG compliance, brand resilience, and employee recruitment and retention, particularly among younger employees who want to work for an ethically responsible company.”
AI opens up new opportunities in Africa
Zambian software engineer and Rhodes Scholar Dr Fredah Banda highlighted the potential for AI development in Africa: “AI has the potential to revolutionize sectors such as health, agriculture, education and finance.”
Dr Banda said: “In healthcare, AI in image processing and analysis is used to improve diagnostic accuracy, thereby speeding up the process, while apps and online services improve access to healthcare. AI helps address complex medical challenges more effectively and also supports health monitoring and reporting, including through smart wearable devices that can track health parameters to support a diagnosis. Examples of AI put into practice in healthcare in Africa include HealthMap, which tracks disease outbreaks, Babylon Health and mPharma.”
In agriculture, she highlighted that AI helps support food production, for example through drones that help farmers monitor crop growth and detect disease. AI is also used in solutions that predict weather conditions and inform planting decisions, as well as improve supply chain management and reduce food waste.
“In finance, the most common use of AI is automated credit scoring,” she said. “However, it also plays an important role in areas such as fraud detection, risk management, algorithmic trading and market research.”
“AI is taking the world of education by storm by providing personalized learning and individualized feedback to learners. It can also be used to help teachers grade homework, create lesson plans, and identify struggling students. It also improves access to education for students in remote or underserved areas,” said Dr Banda.
Priorities for responsible AI
Surveys of webinar participants gauged their views on ethical considerations related to AI. Asked which ethical consideration is most important for responsible AI in healthcare and telemedicine, 73% chose data privacy and security, 13% chose transparency in AI decision-making, and 7% each chose inclusiveness in access to healthcare and fairness in diagnosis. In financial services and fintech, 47% said avoiding discrimination in credit scoring should be a priority, 40% said data privacy and security should be the main consideration, and 7% each said the main consideration should be ensuring informed consent or equitable access to financial services.
Among the top considerations for responsible AI in agriculture, 36% voted for promoting sustainable agricultural practices, 29% for reducing environmental impact, 21% for equitable access to agricultural technology and 14% for the fair treatment of small farmers. Regarding considerations related to online education and learning, 33% said promoting educational equity should be the top consideration, 25% each ranked student data privacy and addressing bias in recommendations as a priority, and 17% said personalized learning experiences should be a major consideration.