
Brazilian Professor Wins UNESCO Award for AI Ethics Research: The Future of Responsible Technology

Hey developers, here's news worth highlighting: a Brazilian professor just received a UNESCO award for pioneering research on artificial intelligence ethics. At a time when generative AI is everywhere and companies like OpenAI, Anthropic, and Google are in a technological arms race, having researchers focused on ethics has never been more crucial.

Have you stopped to think that the AI you use every day was trained with data from billions of people without explicit consent? Or that algorithms deciding whether you get a loan may have built-in racial or socioeconomic biases?

The Context: Why AI Ethics Matters Now More Than Ever

It's 2025, and AI is no longer science fiction or a lab experiment. It's literally everywhere:

AI is Everywhere

In your daily life:

  • ChatGPT, Claude, Gemini: billions of conversations per day
  • YouTube, Netflix, Spotify recommendations
  • Instagram, TikTok, Twitter/X feeds
  • Credit and loan approval systems
  • Medical diagnoses and exam analysis
  • Hiring and resume screening
  • Criminal justice and bail systems
  • Autonomous cars making life-or-death decisions

Scale of use:

  • ChatGPT: 200+ million weekly active users
  • GitHub Copilot: 50+ million developers
  • Midjourney/DALL-E: 30+ million creators
  • AI in healthcare: affecting 500+ million patients
  • AI in finance: processing trillions in transactions

The Problems Are Already Happening

It's not futuristic speculation. AI's ethical problems already affect real people:

Algorithmic Bias:

  • Amazon's hiring system discriminated against women (discontinued 2018)
  • Facial recognition algorithms show error rates up to 34% higher for people with darker skin
  • Credit systems disproportionately deny loans to minorities
  • The COMPAS criminal justice algorithm flagged Black defendants as "high risk" at roughly twice the rate of white defendants

Privacy and Consent:

  • Models trained with scraped internet data without permission
  • Personal photos scraped from the web into the datasets used to train Stable Diffusion
  • GitHub code used to train Copilot (license violations?)
  • Authors' books used without compensation or authorization

Disinformation and Manipulation:

  • Ultra-realistic deepfakes (fake videos of politicians)
  • AI bots generating disinformation at scale
  • Micro-targeted political propaganda with AI
  • Fake news generated faster than fact-checking can keep up

Impact on Work:

  • Millions of jobs at risk (call centers, translation, content creation)
  • Replacement without transition plan for workers
  • Wealth concentration in AI companies

The Brazilian Researcher's Work

The UNESCO-awarded professor developed an ethical framework being adopted by governments and companies worldwide. Let's understand the pillars of this research:

1. Transparency and Explainability

The problem:

  • Deep learning models are "black boxes"
  • Often impossible to know WHY the model made a particular decision
  • Users don't know when they're interacting with AI

The proposed solution:

  • Right to explanation: users should know why a decision was made
  • Mandatory documentation of how models were trained
  • Disclosure when content was AI-generated
  • Independent audit of critical systems

Practical implementations:

  • EU AI Act: requires explainability for high-risk systems
  • Brazil: bill inspired by the research
  • Companies: starting to publish "model cards" with detailed information
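To make "model cards" concrete, here's a minimal sketch of one as structured data, plus the kind of completeness check a publishing pipeline might run. All field names and values are hypothetical, loosely following the "Model Cards for Model Reporting" idea; this is an illustration, not any company's actual format.

```python
# A minimal, illustrative "model card" as structured data.
# All values here are hypothetical.
model_card = {
    "model_name": "loan-approval-classifier",
    "version": "1.2.0",
    "intended_use": "Pre-screening of consumer loan applications; "
                    "final decisions require human review.",
    "training_data": "Historical loan applications, 2015-2023.",
    "evaluation_metrics": {"accuracy": 0.87, "disparate_impact_ratio": 0.91},
    "known_limitations": [
        "Underrepresents applicants under 21",
        "Not validated for small-business loans",
    ],
    "inappropriate_uses": ["Fully automated rejection without human review"],
}

# Fields a release process might require before the model ships.
REQUIRED = {"model_name", "intended_use", "known_limitations", "inappropriate_uses"}
missing = REQUIRED - model_card.keys()
print("complete" if not missing else f"missing: {sorted(missing)}")
```

The point is less the exact schema and more that limitations and inappropriate uses are documented up front, where users and auditors can find them.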

2. Fairness and Non-Discrimination

The problem:

  • AI learns biases from training data
  • Historical data reflects past discrimination
  • Bias amplified when model is applied at scale

The proposed solution:

  • Mandatory fairness testing before deployment
  • Balanced and representative datasets
  • Continuous monitoring of outcomes by demographics
  • Active correction of identified biases

Developed techniques:

  • Fairness metrics: disparate impact, equalized odds
  • De-biasing algorithms
  • Adversarial debiasing
  • Causal fairness analysis
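One of the metrics above, disparate impact, can be sketched in a few lines of plain Python on toy data (the decisions and group labels here are invented for illustration):

```python
# Illustrative computation of the disparate impact metric on toy model
# decisions (1 = approved). Group labels and data are hypothetical.
def disparate_impact(decisions, groups, protected, reference):
    """Ratio of approval rates: protected group vs. reference group."""
    def rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / len(members)
    return rate(protected) / rate(reference)

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# The "four-fifths rule": a ratio below 0.8 is a common red flag.
# (Equalized odds, by contrast, compares true/false positive rates per group.)
di = disparate_impact(decisions, groups, protected="B", reference="A")
print(f"disparate impact: {di:.2f}")
```

In practice you'd use a tested library rather than hand-rolled code, but the underlying arithmetic really is this simple, which is why there's little excuse for not measuring it.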

3. Privacy and Data Protection

The problem:

  • Models memorize training data
  • Possible to extract sensitive information from models
  • Gigantic datasets scraped without consent

The proposed solution:

  • Differential privacy: add "noise" that protects individuals
  • Federated learning: train without centralizing data
  • Right to be forgotten: remove data from models
  • Explicit consent for data use
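The first idea, differential privacy, is the easiest to sketch: release an aggregate statistic with calibrated noise so no single individual's presence can be inferred. This is a minimal version of the classic Laplace mechanism; the epsilon and sensitivity values are illustrative, not recommendations.

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon=1.0, sensitivity=1.0):
    """Release a counting query with epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one person
    changes the count by at most 1, so noise of scale 1/epsilon suffices.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(0)  # seeded only to make the demo reproducible
print(round(private_count(1000, epsilon=0.5)))  # near 1000, but noisy
```

Smaller epsilon means more noise and stronger privacy; the released value stays useful in aggregate while any individual contribution is hidden in the noise.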

Emerging technologies:

  • Homomorphic encryption: compute on encrypted data
  • Secure multi-party computation
  • Synthetic data generation
  • Privacy-preserving machine learning

4. Accountability and Governance

The problem:

  • Who's responsible when AI errs and causes harm?
  • AI companies operate without adequate oversight
  • Lack of consistent global regulation

The proposed solution:

  • Mandatory registration of high-risk AI systems
  • Independent third-party audit
  • Clear liability framework
  • Multidisciplinary ethics committees

Regulatory frameworks:

  • EU AI Act (2024): first comprehensive AI law
  • Brazil: discussions based on award-winning research
  • USA: Executive Order on AI (2023)
  • China: sector-specific regulations

5. Human Agency and Oversight

The problem:

  • AI making critical decisions without human supervision
  • Humans "rubber stamping" AI decisions without questioning
  • Erosion of human skills from AI dependence

The proposed solution:

  • "Human in the loop" for critical decisions
  • Right to human review of automated decisions
  • Training professionals to supervise AI
  • Preservation of human override capability
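"Human in the loop" can be as simple as a confidence gate: the model only acts on its own when it's confident, and everything else lands in a human review queue. A minimal sketch, with a hypothetical threshold and decision shape:

```python
# Minimal "human in the loop" routing: low-confidence automated decisions
# are escalated to a human instead of being applied automatically.
# The 0.9 threshold and the decision format are illustrative.
def route_decision(prediction, confidence, threshold=0.9):
    """Return the decision and who is responsible for making it."""
    if confidence >= threshold:
        return {"decision": prediction, "decided_by": "model"}
    return {"decision": None, "decided_by": "human_review_queue"}

print(route_decision("approve", 0.97))  # applied automatically
print(route_decision("deny", 0.62))     # escalated to a human reviewer
```

The harder part is organizational, not technical: reviewers need the time, training, and authority to actually overrule the model, otherwise this degenerates into the rubber-stamping problem described above.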

The Global AI Ethics Landscape

The UNESCO-awarded Brazilian research is part of a growing global movement:

Paris AI Action Summit (February 2025)

Participants:

  • 50+ countries including Brazil
  • BigTech leaders (OpenAI, Google, Meta, Anthropic)
  • Academia and civil society
  • NGOs and human rights organizations

Themes discussed:

  • Balancing innovation and regulation
  • Global AI governance
  • Prevention of malicious use
  • Equitable distribution of benefits

Outcomes:

  • Commitment to common ethical principles
  • International cooperation mechanism
  • Funding for safe AI research

Legislation Around the World

Europe (EU AI Act - 2024):

Risk classification:

Unacceptable Risk (Prohibited):

  • Social scoring by governments
  • Subliminal manipulation
  • Exploitation of group vulnerabilities

High Risk (Strict Regulation):

  • Hiring systems
  • Credit and financial scoring
  • Law enforcement
  • Education and student evaluation
  • Critical infrastructure

Limited Risk (Transparency):

  • Chatbots: must be clear it's AI
  • Deepfakes: must be marked

Minimal Risk (Unregulated):

  • Spam filters
  • AI games

Penalties:

  • Up to €35 million or 7% of global revenue
  • Some companies have already been fined

Brazil (Bill Under Discussion):

Inspired by Brazilian research such as this award-winning work:

  • Mandatory transparency
  • Fundamental rights impact assessment
  • Prohibition of algorithmic discrimination
  • Right to review automated decisions
  • Creation of National AI Authority

USA (Executive Order + State Legislation):

More fragmented approach:

  • Executive Order 14110 (2023): focuses on national security
  • California AI Transparency Act
  • New York: AI regulation in hiring
  • Colorado: right to opt-out of automated decisions

China (Sectoral Regulation):

  • Recommendation algorithm regulation (2022)
  • Deep synthesis rules (deepfakes)
  • Requirements for generative models
  • Censorship and government control

Specific Ethical Challenges For Developers

If you work with AI or software development, here are ethical dilemmas you'll likely face:

1. Data Collection and Use

Dilemma:

  • Your model needs massive data to work well
  • But obtaining consent from millions is impractical
  • Use public internet data without permission?

Considerations:

  • Legal ≠ Ethical
  • "Public" on the internet doesn't mean consent to train AI
  • Creative Commons and licenses should be respected
  • Sensitive data (medical, financial) requires extra protection

2. Bias and Fairness

Dilemma:

  • Your dataset reflects historical inequalities
  • Removing sensitive variables (race, gender) doesn't eliminate bias
  • Other correlated features perpetuate discrimination

Considerations:

  • Test fairness across different demographic groups
  • Trade-off between accuracy and fairness sometimes necessary
  • Document known limitations and biases
  • Continuously monitor real-world outcomes

3. Transparency vs. Intellectual Property

Dilemma:

  • Full transparency exposes your model to competitors
  • But users have right to know how it works
  • How to balance?

Considerations:

  • Publish general information without exposing exact architecture
  • Model cards: datasets, limitations, appropriate use
  • Individual decision explanations without revealing complete model
  • Open source non-critical components

4. Dual Use: Technology with Good and Bad Use

Dilemma:

  • Your tool can be used for good (medical research)
  • Or for bad (bioweapon creation, disinformation)
  • Are you responsible for misuse?

Considerations:

  • Anticipate possible malicious uses
  • Implement guardrails and safety measures
  • Deny access to identified bad actors
  • Collaborate with regulators

5. Impact on Employment

Dilemma:

  • Your AI automates tasks and increases productivity
  • But eliminates jobs of real people
  • Do you have responsibility to those affected?

Considerations:

  • Transparency about work impact
  • Support retraining programs
  • Design systems that augment humans instead of replacing them
  • Consider gradual transition

How to Develop AI Ethically

If you work or want to work with AI, here's a practical framework:

1. Design Phase

Questions to ask:

  • Is this system necessary or can we achieve the goal without AI?
  • Who will be affected and were they consulted?
  • What are the harm risks and how can we mitigate them?
  • How do we ensure fairness across different groups?

Actions:

  • Multidisciplinary impact assessment
  • Consultation with affected stakeholders
  • Participatory design when possible
  • Define success metrics beyond accuracy (include fairness, safety)

2. Development Phase

Recommended practices:

  • Document design decisions and trade-offs
  • Test on diverse and balanced datasets
  • Implement fairness metrics from the start
  • Code review focused on ethical issues
  • Red teaming: try to break/abuse the system

Tools:

  • Fairness libraries (AIF360, Fairlearn)
  • Explainability tools (LIME, SHAP)
  • Privacy-preserving ML frameworks
  • Adversarial testing
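Libraries like LIME and SHAP implement explainability rigorously; the core perturbation idea behind them can be sketched in a few lines. The model here is a hypothetical linear scorer invented for the example, and this ablate-one-feature approach is a heavy simplification of what those libraries actually do:

```python
# Toy perturbation-based explanation (the idea behind LIME/SHAP, heavily
# simplified): zero out one feature at a time and measure how much the
# model's score changes. The model is a hypothetical linear scorer.
def model_score(features):
    weights = {"income": 0.5, "debt": -0.8, "age": 0.1}
    return sum(weights[k] * v for k, v in features.items())

def feature_importance(features):
    base = model_score(features)
    impact = {}
    for name in features:
        perturbed = dict(features, **{name: 0})  # ablate one feature
        impact[name] = base - model_score(perturbed)
    return impact

applicant = {"income": 1.2, "debt": 0.9, "age": 0.3}
print(feature_importance(applicant))
```

Even this toy version shows which feature dominated a single decision, which is the kind of per-decision explanation the "right to explanation" requires, without revealing the full model.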

3. Deployment Phase

Checklist:

  • Consent and transparency for users
  • Monitoring outcomes by demographics
  • Feedback and correction mechanism
  • Human oversight for critical decisions
  • Incident response plan

Documentation:

  • Public model cards
  • Datasheet for datasets
  • Known limitations
  • Appropriate and inappropriate use cases

4. Continuous Monitoring Phase

What to monitor:

  • Distribution shift: real-world change vs training data
  • Fairness metrics over time
  • User and affected party feedback
  • Unintended or malicious uses
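The first item, distribution shift, has cheap standard checks. One common one is the Population Stability Index (PSI), comparing a feature's binned distribution at training time against production. The bin proportions below are illustrative:

```python
import math

# Population Stability Index (PSI): a common distribution-shift check
# between training-time and live binned feature distributions.
def psi(expected, actual):
    """PSI over matching bins; above ~0.2 is a common 'investigate' signal."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0  # skip empty bins to avoid log(0)
    )

train_bins = [0.25, 0.25, 0.25, 0.25]  # feature distribution at training time
live_bins  = [0.10, 0.20, 0.30, 0.40]  # distribution observed in production

print(f"PSI = {psi(train_bins, live_bins):.3f}")
```

Running a check like this per feature, per demographic group, on a schedule is what "continuous monitoring" means in practice: it catches the model drifting away from the world it was trained on before users notice.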

Actions:

  • Regular retraining with updated data
  • A/B testing of fairness improvements
  • Periodic independent audits
  • Iteration based on real impact

The Future of Ethical AI

Where are we heading?

Emerging Trends

1. AI Ethics by Design

  • Ethics is not an add-on, it's fundamental
  • Tools that enforce ethical considerations
  • Industry certifications and standards

2. Explainable AI (XAI)

  • Advances in model interpretability
  • Right to explanation becoming law
  • Increasingly sophisticated tools

3. Differential Privacy Mainstream

  • Growing adoption (Apple, Google already use)
  • Performance vs. privacy trade-off improving
  • Easier-to-use frameworks

4. Multistakeholder Governance

  • Not just companies deciding
  • Civil society and affected party participation
  • Ethics committees with real power

5. AI Auditing Industry

  • New independent auditing industry
  • Ethical AI certifications
  • Public compliance reports

The Growing Role of Brazilian Research

The UNESCO award to the Brazilian researcher highlights:

  • Brazil has internationally recognized expertise
  • Brazilian research influencing global policy
  • Leadership opportunity in AI ethics in Latin America
  • Potential to export frameworks and solutions

Conclusion: Responsible Technology is the Future

UNESCO's recognition of a Brazilian researcher in AI ethics is not just a national victory – it's a sign that the world is taking these themes seriously.

For developers, designers, and everyone working with technology, the message is clear: ethics is not optional, it's essential. AI systems that don't consider ethical impact not only cause real harm to real people, but also face growing regulation and social rejection.

The good news? Developing ethical AI is not incompatible with innovation – it's the only way to create technology that truly improves people's lives in a sustainable and fair way.

If you're interested in how AI is transforming different areas of technology, I recommend checking out another article: GPT-5 vs Claude Sonnet for Coding: Which AI Generates Better Code in 2025? where you'll discover the latest comparisons between AI models for development.

Let's go! 🦅

🎯 Want to Work with AI Responsibly?

Developing ethical AI systems requires a solid programming foundation and understanding of algorithms. Deeply knowing technical fundamentals allows you to implement fairness metrics, explainability tools, and privacy-preserving techniques effectively.

Complete Study Material

If you want to master the technical fundamentals necessary to work with responsible AI:

Investment options:

  • $4.90 (single payment)

👉 Learn About JavaScript Guide

💡 Solid technical foundation is essential for implementing ethical AI
