Startup Captures 10,000 Hours of Brain Scans to Train AI That Converts Thoughts to Text

Hello HaWkers, the boundary between mind and machine is becoming increasingly blurred. A startup just announced it has collected over 10,000 hours of brain scans to train an artificial intelligence model capable of converting thoughts directly into text.

This technology, once confined to academic laboratories, is about to become commercially viable. Let's understand how it works, what the applications are, and most importantly, the ethical implications of this innovation.

What the Startup Developed

The company created a platform that combines non-invasive neuroimaging with advanced language models.

System characteristics:

  • Dataset: 10,000+ hours of brain scans (fMRI and EEG)
  • Participants: 2,500 volunteers
  • Current accuracy: ~82% for limited vocabulary
  • Latency: 2-3 seconds from thought to text
  • Method: Non-invasive (no implants)

How It Works

The process combines neuroimaging with sophisticated machine learning.

Conversion pipeline:

  1. User thinks of a phrase or concept
  2. Sensor captures brain activity (EEG or fMRI)
  3. Signals are preprocessed to remove noise
  4. AI model interprets neural patterns
  5. Decoder converts patterns to language tokens
  6. Language model refines output to coherent text

🧠 Important: The system doesn't "read" thoughts literally. It identifies brain patterns associated with specific linguistic concepts.
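The six-step pipeline above can be sketched end to end in code. The snippet below is a toy illustration only, not the startup's actual model: it smooths a synthetic signal with a moving-average filter (step 3) and matches the result against stored neural "templates" by nearest distance (steps 4-5). Real systems use deep neural decoders, and all names here are invented for the example.

```javascript
// Toy brain-to-text pipeline (illustrative only; real decoders are
// deep neural networks, not nearest-template matching).

// Step 3: preprocess — simple moving-average filter to reduce noise
function smooth(signal, window = 3) {
  return signal.map((_, i) => {
    const slice = signal.slice(Math.max(0, i - window + 1), i + 1);
    return slice.reduce((a, b) => a + b, 0) / slice.length;
  });
}

// Steps 4-5: interpret patterns — the nearest stored template wins
const templates = {
  hello: [0.9, 0.8, 0.9, 0.85],
  water: [0.1, 0.2, 0.15, 0.1],
};

function decode(signal) {
  const clean = smooth(signal);
  let best = null;
  let bestDist = Infinity;
  for (const [word, pattern] of Object.entries(templates)) {
    const dist = pattern.reduce((sum, v, i) => sum + (v - clean[i]) ** 2, 0);
    if (dist < bestDist) {
      bestDist = dist;
      best = word;
    }
  }
  return best;
}

// A noisy signal close to the "hello" template still decodes correctly
console.log(decode([0.95, 0.75, 0.92, 0.8])); // "hello"
```

In a real system, step 6 would then pass the decoded tokens to a language model that smooths them into grammatical text.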

The Data Behind the Model

Model quality depends directly on the quantity and quality of training data.

Dataset Composition

Types of data collected:

| Exam type | Hours | Participants | Resolution |
| --- | --- | --- | --- |
| fMRI | 6,000 | 1,500 | High spatial |
| EEG | 3,500 | 2,000 | High temporal |
| MEG | 500 | 200 | High temporal |
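As a quick sanity check, the composition above can be tallied programmatically (figures taken from the table; note that participant counts cannot simply be summed, since the same volunteer may have done more than one exam type):

```javascript
// Dataset composition, transcribed from the table above
const dataset = [
  { modality: "fMRI", hours: 6000, participants: 1500 },
  { modality: "EEG",  hours: 3500, participants: 2000 },
  { modality: "MEG",  hours: 500,  participants: 200  },
];

const totalHours = dataset.reduce((sum, d) => sum + d.hours, 0);
console.log(totalHours); // 10000 — matches the headline figure

// Participant rows sum to 3700, versus 2500 unique volunteers:
// many volunteers underwent more than one exam modality.
const participantRows = dataset.reduce((sum, d) => sum + d.participants, 0);
console.log(participantRows); // 3700
```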

Collection Protocol

Volunteers performed specific tasks during exams.

Tasks included:

  • Silent reading of texts
  • Listening to audiobooks
  • Imagining specific words
  • Mental description of images
  • Simulated internal conversations
  • Mental narration of stories

Each session lasted 2-4 hours, with breaks to avoid fatigue.

Accuracy and Limitations

The technology impresses but still has significant limitations.

What Works Well

Scenarios with high accuracy (~85-92%):

  • Restricted vocabulary (500-1000 words)
  • Simple and direct sentences
  • Concrete concepts (objects, actions)
  • Users trained on the system
  • Controlled laboratory environment

What Is Still Challenging

Scenarios with lower accuracy (~60-75%):

  • Open vocabulary
  • Complex or ambiguous sentences
  • Abstract concepts (emotions, ideas)
  • New users without training
  • Distracting environments
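One intuition for why restricted vocabularies score so much higher: the decoder only has to separate a few hundred candidates instead of an open set, so confusable words can simply be excluded. The sketch below uses invented scores and function names to illustrate the idea:

```javascript
// Why restricted vocabularies help: the same noisy decoder scores
// separate more cleanly when confusable candidates are excluded.
// (All scores and words are invented for illustration.)

function pickBest(scores, allowedWords) {
  // Keep only candidates in the active vocabulary, then take the max
  const filtered = Object.entries(scores)
    .filter(([word]) => allowedWords.includes(word));
  filtered.sort((a, b) => b[1] - a[1]);
  return filtered[0][0];
}

// Raw decoder scores when the user thinks "water"
const scores = { water: 0.41, walker: 0.44, wander: 0.38, thirsty: 0.2 };

// Open vocabulary: a similar-sounding word narrowly wins — an error
console.log(pickBest(scores, Object.keys(scores))); // "walker"

// A restricted "basic needs" vocabulary excludes the confusable words
console.log(pickBest(scores, ["water", "food", "help", "pain"])); // "water"
```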

Comparison of Approaches

Invasive vs Non-invasive:

| Characteristic | Invasive (Neuralink) | Non-invasive (this startup) |
| --- | --- | --- |
| Accuracy | ~95% | ~82% |
| Speed | Real-time | 2-3 seconds |
| Vocabulary | Broad | Limited |
| Medical risk | High | None |
| Cost | $50k+ (surgery) | ~$500 (device) |
| Potential adoption | Restricted | Broad |

Practical Applications

This technology has potential to transform several areas.

Accessibility

People with speech disabilities can gain a new voice.

Potential beneficiaries:

  • ALS patients (amyotrophic lateral sclerosis)
  • Stroke victims with aphasia
  • People with cerebral palsy
  • Patients with locked-in syndrome
  • Non-verbal people with severe autism

💡 Impact: Millions of people worldwide have lost the ability to speak. This technology can restore communication.

Productivity

Professionals could "type" just by thinking.

Use scenarios:

  • Programmers writing code mentally
  • Writers capturing ideas without interruption
  • Executives dictating emails while driving
  • Surgeons documenting procedures in real-time

Gaming and Entertainment

Mental interfaces could revolutionize gaming.

Possibilities:

  • Character control by thought
  • Immersive virtual reality experiences
  • Telepathic communication in multiplayer games
  • Music composition through imagination

Implications For Developers

This technology opens new frontiers for software creation.

New APIs and SDKs

BCI (Brain-Computer Interface) companies are launching developer tools.

Conceptual integration example:

import { BrainInterface } from '@neural-sdk/brain';

// Initialize brain interface
const brain = new BrainInterface({
  device: 'headset-v2',
  mode: 'text-decode',
  language: 'en-US',
});

// Set up callback for decoded text
brain.onThoughtDecoded((result) => {
  console.log('Thought detected:', result.text);
  console.log('Confidence:', result.confidence);

  if (result.confidence > 0.8) {
    processUserInput(result.text);
  }
});

// Start listening for thoughts
async function startMindReading() {
  try {
    await brain.connect();
    await brain.calibrate({ duration: 30 }); // 30 seconds calibration

    brain.startListening({
      vocabulary: 'general',
      maxLatency: 3000,
      minConfidence: 0.7,
    });

    console.log('Ready to receive thoughts!');
  } catch (error) {
    console.error('Connection error:', error);
  }
}

This example shows how developers can integrate brain interfaces into their applications.

New UX Paradigms

Mental interfaces require rethinking user experience design.

UX considerations:

// Mental interaction model
class MentalUXManager {
  constructor() {
    this.feedbackModes = ['visual', 'audio', 'haptic'];
    this.confirmationRequired = true;
  }

  // Thoughts need confirmation before action
  async processThought(thought) {
    // Show what was interpreted
    this.showPreview(thought.text);

    // Wait for confirmation (another thought or gesture)
    const confirmed = await this.waitForConfirmation({
      timeout: 5000,
      methods: ['blink-twice', 'think-yes', 'nod'],
    });

    if (confirmed) {
      return this.executeAction(thought);
    }

    return this.cancelAction(thought);
  }

  // Continuous feedback is essential
  provideFeedback(status) {
    this.feedbackModes.forEach(mode => {
      switch (mode) {
        case 'visual':
          this.updateUI(status);
          break;
        case 'audio':
          this.playTone(status);
          break;
        case 'haptic':
          this.vibrate(status);
          break;
      }
    });
  }
}
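The confirm-before-act pattern in `processThought` above can be reduced to a small, self-contained gate. The version below is a hypothetical sketch (timestamps are passed in explicitly to keep it testable): an interpreted thought only executes if a separate confirmation signal arrives within the timeout.

```javascript
// Minimal confirm-before-act gate for mental input (hypothetical API).
// A decoded thought is executed only if confirmed within the timeout;
// otherwise it is discarded — accidental thoughts never trigger actions.

function confirmationGate(timeoutMs = 5000) {
  let pending = null;

  return {
    // Show the user what was decoded and hold it for confirmation
    propose(text, now) {
      pending = { text, at: now };
      return `preview: ${text}`;
    },
    // Confirmation signal (e.g. blink twice, think "yes", nod)
    confirm(now) {
      if (!pending || now - pending.at > timeoutMs) {
        pending = null;
        return { status: "cancelled" };
      }
      const { text } = pending;
      pending = null;
      return { status: "executed", text };
    },
  };
}

const gate = confirmationGate(5000);

gate.propose("send email to Ana", 0);
console.log(gate.confirm(2000)); // { status: "executed", text: "send email to Ana" }

gate.propose("delete file", 10000);
console.log(gate.confirm(16000)); // { status: "cancelled" } — confirmation too late
```

Defaulting to cancellation on timeout is the safer choice here: a missed action is recoverable, while an unintended one may not be.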

Ethical and Privacy Issues

Technology that reads minds raises serious concerns.

Thought Privacy

Thoughts are the last refuge of absolute privacy. This technology changes that.

Main concerns:

  • Who has access to brain data?
  • Can thoughts be used as evidence?
  • Could employers require mental monitoring?
  • Could authoritarian governments abuse the technology?
  • Could hackers "invade" thoughts?

Consent and Control

Critical questions:

| Question | Implication |
| --- | --- |
| Can I turn it off at any time? | User autonomy |
| Is my data stored? | Data privacy |
| Can it be sold to third parties? | Mind commercialization |
| Can I delete my neural profile? | Right to be forgotten |
| Can children be monitored? | Minor privacy |

Emerging Regulation

Governments are starting to create laws for neurotechnology.

Regulatory movements:

  • Chile: First constitution to protect "neurorights"
  • EU: Proposed BCI regulation under AI Act
  • USA: Bills under discussion in Congress
  • Brazil: No specific legislation yet

⚠️ Warning: Technology advances faster than regulation. Developers have ethical responsibility.

The Neurotechnology Market

The sector is growing rapidly, attracting billions in investments.

Market Overview

Sector numbers:

  • Global BCI market 2025: $2.8 billion
  • 2030 projection: $8.5 billion
  • Annual growth: ~25%
  • Active startups: 200+
  • Total investment 2020-2025: $6.2 billion

Main Players

Featured companies:

| Company | Approach | Investment | Status |
| --- | --- | --- | --- |
| Neuralink | Invasive | $363M | Human trials |
| Synchron | Semi-invasive | $145M | FDA approval |
| Kernel | Non-invasive | $107M | Commercial |
| NextMind (Meta) | Non-invasive | Acquired | Integrated |
| This startup | Non-invasive | $85M | Pre-commercial |

Career Opportunities

The sector is actively hiring.

In-demand profiles:

  • Computational neuroscientists
  • ML engineers specializing in signals
  • Firmware developers for wearables
  • UX specialists for mental interfaces
  • Technology ethicists
  • Neurotech regulatory specialists

The Future of Mind-Machine Interface

The coming years should bring significant advances.

Technology Roadmap

Expected evolution:

| Year | Milestone | Capability |
| --- | --- | --- |
| 2025 | Current | 82% accuracy, limited vocabulary |
| 2026 | v2.0 | 90% accuracy, 5,000 words |
| 2027 | v3.0 | Complex sentences, emotions |
| 2028 | v4.0 | Fluid conversation |
| 2030 | v5.0 | Thought to code |
| 2035 | Mature | Digital telepathy |

Convergence With Other Technologies

Future synergies:

  • Generative AI: Complete incomplete thoughts
  • Augmented Reality: Mental control of AR interfaces
  • Robotics: Control of prosthetics and exoskeletons
  • Metaverse: Thought-controlled avatars
  • Medicine: Real-time mental health monitoring

Conclusion

The ability to convert thoughts into text marks an inflection point in the history of technology. Non-invasive thought decoding has existed in academic labs for years, but this is among the first serious attempts to cross the mind-machine barrier at commercial scale, without surgery.

For developers, this means new platforms, new APIs and new interaction paradigms. But it also means responsibility: technology that accesses the human mind requires an unprecedented level of ethical care.

If you want to follow other innovations shaping the future of technology, I recommend checking out the article Data Storage For Billions of Years where we explore another technology that seems like science fiction.

Let's go! 🦅
