TECH NEWS – By the end of last year, global AI investment had reached roughly $500 billion, a once-unthinkable threshold, and this is only the beginning: artificial intelligence is on track to become a foundational technology in every sector. However, as AI penetrates more areas of everyday life, legal and ethical concerns are emerging alongside it, such as the “black box” problem, discrimination, and questions of accountability.
The European Union was the first in the world to establish a comprehensive regulatory framework, taking a risk-based approach to the issue. Critics, however, point out that excessive, bureaucratic regulation could harm the competitiveness of a continent already lagging behind. Why is it so important to examine this pioneering technology from an ethical perspective, and how can companies ensure that their developments do not founder on the AI Act? AI adoption is neither good nor bad in itself; whether its use is ethical depends on the goals it serves and the circumstances in which it is applied, emphasize the experts at Stylers Group.
The AI business is taking off
The number of new AI-based developments is growing dynamically. Last year alone, companies in the U.S. market reported a 130% increase in AI budgets compared to 2023, as detailed in a report. Generative AI solutions are making the biggest waves, with most investment flowing into the software and information services segments, along with banking and retail, according to an IDC blog. All signs point to a future in which virtually every technology could be AI-driven within a few years. The rapid expansion of AI, however, raises ethical concerns that are a growing source of anxiety.
“The workings of algorithms are often opaque, their conclusions can be biased or even discriminatory, and there are cases where AI violates accepted human ethical norms,” says Gábor Gönczy, CEO and owner of the IT company group Stylers Group. Recently, for example, a study highlighted ChatGPT’s ability to deceive people and circumvent shutdown commands, while another OpenAI model hacked a chess game to defeat a stronger AI opponent despite not being instructed to do so. “It’s not hard to see that regulating algorithms is a prerequisite for exploiting their virtually limitless potential within a safe and ethical framework,” Gönczy adds.
Rules are needed, but the mindset matters most
These developments raise ethical questions that increasingly concern both professionals and the public: who is responsible when algorithms hallucinate, or when their decisions prove erroneous or discriminatory? It is crucial to emphasize that the consequences of AI-related decisions, whether positive or negative, ultimately fall on humans, even when those decisions rest on AI recommendations. While technology is undoubtedly reshaping processes and roles across sectors, one of the dominant fears today is that machines will take away jobs and livelihoods. If AI is developed solely to cut jobs and maximize short-term profit, the societal cost could be significant. If it is approached strategically, with attention to retraining workers, it could greatly enhance productivity and improve our quality of life. Ethical approaches seek a balance between human-centric use, responsible decision-making, and long-term societal benefit.
To address these challenges, the AI Act, the European Union’s pioneering legislation on artificial intelligence, provides a comprehensive regulatory framework for the development and application of AI systems, with a particular focus on adhering to ethical norms. The risk-based approach of the law addresses issues such as non-discrimination and fairness, while also including multi-level transparency requirements to ensure proper oversight. A well-structured AI development process that incorporates not only technical aspects but also human and ethical expectations helps ensure systems are safe, reliable, and socially beneficial, meeting not only the emerging AI Act requirements but also long-term sustainability goals.
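To illustrate, purely schematically, how this risk-based logic could be encoded in an internal compliance checklist, the sketch below maps a few hypothetical use cases to the AI Act’s four risk tiers. The tier names reflect the regulation, but the use-case mapping, names, and descriptions are illustrative assumptions, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "prohibited outright (e.g. social scoring)"
    HIGH = "strict obligations: risk management, documentation, human oversight"
    LIMITED = "transparency duties (e.g. disclose that users talk to a chatbot)"
    MINIMAL = "no specific obligations"

# Hypothetical mapping of example use cases to tiers -- illustrative only;
# real classification requires legal analysis against the AI Act's annexes.
EXAMPLE_USE_CASES = {
    "social_scoring_of_citizens": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def required_action(use_case: str) -> str:
    """Look up the assumed tier and summarize the obligations it implies."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(required_action(case))
```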
Ethical development means planning for the long term and involving employees
Ethical AI development that complies with the AI Act requires thoughtful and responsible practices. The first step involves conducting risk assessments to clarify the classification and potential impacts of a product or service. Attention must be given to data protection, data quality, and privacy requirements such as GDPR compliance. Additionally, to ensure fair algorithmic functioning, efforts must be made to prevent discrimination and address biases. Transparency can be achieved by supporting development with proper documentation and internal compliance management processes, while human oversight is indispensable at every stage.
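As one concrete example of the bias checks mentioned above, the following minimal sketch computes a demographic parity gap, the difference in positive-outcome rates between groups. The data, column names, and function are illustrative assumptions, one common fairness metric among many, not a metric prescribed by the AI Act or any Stylers Group process.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str,
                           outcome_col: str) -> float:
    """Difference between the highest and lowest positive-outcome
    rates across groups; 0.0 means perfectly equal rates."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical audit data: binary model decisions per applicant group.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(decisions, "group", "approved")
print(f"Demographic parity gap: {gap:.2f}")
# A large gap is a signal to investigate the model and its training data,
# not a verdict on its own -- context and other fairness metrics matter.
```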
The work doesn’t end with development; monitoring and reviewing systems is a long-term commitment that should involve relevant stakeholders. “This also means that alongside experts who deeply understand the technology, employees who use AI systems daily must also be adequately prepared. This requires leadership commitment, continuous education, and targeted training programs to help employees effectively use AI solutions. Moreover, this approach has significant retention benefits: when colleagues feel competent with new technologies, it greatly alleviates fears that algorithms could replace them at any moment,” emphasizes Gábor Gönczy.
Source: Influence Media