AI Transparency Disclosure
Last updated: 9 April 2026
This page describes how auraScribe uses artificial intelligence, in accordance with the EU AI Act transparency obligations (Art. 50). It is intended to provide clear, accessible information about what our AI does, how it works, and what it does not do.
1. AI System Identification (Art. 50(1))
auraScribe uses AI to process meeting recordings. Specifically:
- You are interacting with an AI system. The Meeting Bot that joins your meeting is an automated tool. It is not a human participant.
- The bot identifies itself by name: “[User’s first name]’s auraScribe Notetaker”
- The bot posts a disclosure message upon joining: “auraScribe (AI) is recording this meeting for notes and conversational coaching.”
- All transcripts, summaries, and behavioural analyses are AI-generated content, not human-authored reports.
2. What the AI Does
auraScribe’s AI pipeline operates as follows:
- Audio capture — The Meeting Bot (powered by Meeting BaaS / Spoke) joins the meeting and records the audio stream.
- Transcription and diarisation — Google Gemini AI processes the audio to produce a verbatim transcript with speaker labels. Voice characteristics are used solely to distinguish speakers (diarisation), not to infer emotional states.
- Structured summarisation — The AI generates meeting notes, action items, and key discussion points from the transcript.
- Behavioural analysis — The AI analyses communication patterns such as speaking time distribution, interruption frequency, question-to-statement ratios, and topic transitions. This analysis is intended for conversational coaching — helping users reflect on and improve their communication.
- EU Compliance Agent (mandatory post-processing) — A dedicated compliance agent reviews all AI output before it reaches the user. Its sole function is to identify and rewrite any language that could constitute emotion recognition, replacing it with observable, behavioural descriptions. No output reaches the user without passing through this step.
- Storage — Processed results are stored in Google Firestore. Audio recordings are stored in Google Cloud Storage. All storage is within EU regions.
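For illustration, the stages above can be sketched as a sequential flow. All function bodies below are stand-ins (the real transcription and summarisation are performed by external services such as Gemini and Meeting BaaS); only the ordering, and the fact that every result passes through the compliance step before delivery, reflects the description above.

```python
# Illustrative sketch of the processing order described above.
# Function bodies are placeholders, not auraScribe's actual implementation.

def transcribe_and_diarise(audio):
    # Stage 2: verbatim transcript with speaker labels.
    return [{"speaker": "S1", "text": "Let's review the budget."}]

def summarise(transcript):
    # Stage 3: structured notes, action items, key points.
    return {"notes": [seg["text"] for seg in transcript]}

def analyse_behaviour(transcript):
    # Stage 4: observable metrics only (who spoke, how often).
    return {"turns_per_speaker": {"S1": 1}}

def compliance_review(output):
    # Stage 5: mandatory gate; every output passes through here
    # before it reaches the user.
    return {"reviewed": True, **output}

def process_meeting(audio):
    transcript = transcribe_and_diarise(audio)
    return (compliance_review(summarise(transcript)),
            compliance_review(analyse_behaviour(transcript)))
```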
3. Why Emotion Recognition (Art. 3(39)) Does Not Apply
This section addresses the classification of auraScribe under the EU AI Act, specifically regarding the prohibition and regulation of emotion recognition systems.
3.1 The Definition
EU AI Act Art. 3(39) defines an “emotion recognition system” as:
An AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.
3.2 How auraScribe Differs
auraScribe processes voice recordings, which are biometric data under the GDPR. However, the system is specifically designed and engineered not to identify or infer emotions:
- Biometric data use is limited to diarisation. Voice characteristics (pitch, timbre, cadence) are processed only to distinguish between speakers — i.e., to determine who is talking, not how they feel.
- Analysis is behavioural, not emotional. The AI examines observable conversational patterns: who spoke, for how long, how often they were interrupted, what types of questions were asked. These are objective, measurable behaviours — not emotional states.
- The Compliance Agent enforces the boundary. Even if the underlying AI model produces language that resembles emotion inference (e.g., “the speaker seemed frustrated”), the mandatory compliance agent rewrites it before delivery.
3.3 Compliance Agent — Concrete Examples
The following table illustrates how the compliance agent transforms AI output:
| AI Raw Output (before compliance agent) | Delivered to User (after compliance agent) |
| --- | --- |
| “Sarah seemed frustrated during the budget discussion” | “Sarah increased her speaking pace and interrupted twice during the budget discussion” |
| “The team appeared enthusiastic about the new project” | “Multiple team members asked follow-up questions and speaking time was evenly distributed during the new project discussion” |
| “John was disengaged during the second half” | “John’s speaking contributions decreased by 80% in the second half of the meeting” |
| “The tone of the meeting was tense” | “There were 12 interruptions in a 30-minute segment, compared to 3 in the first 30 minutes” |
The compliance agent operates as a mandatory, non-bypassable post-processing step applied to 100% of AI output; it is not optional, and users cannot disable it.
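One way such a gate can be structured, shown here purely as a sketch and not as auraScribe's actual rule set, is a filter that flags emotion-inferring vocabulary and refuses to deliver text until a rewrite pass has removed it. The term list and rewrite function below are illustrative assumptions only; the real compliance agent is an AI review step, not a fixed keyword list.

```python
import re

# Illustrative emotion-language gate. The flagged terms are examples,
# not auraScribe's actual policy vocabulary.
EMOTION_TERMS = re.compile(
    r"\b(frustrated|enthusiastic|disengaged|tense|angry|happy|anxious)\b",
    re.IGNORECASE,
)

def violates_policy(text: str) -> bool:
    """True if the text contains flagged emotion-inference language."""
    return bool(EMOTION_TERMS.search(text))

def deliver(raw_output: str, rewrite) -> str:
    """Mandatory, non-bypassable step: output is rewritten until it
    contains no flagged terms, and only then delivered."""
    text = raw_output
    while violates_policy(text):
        text = rewrite(text)
    return text
```

A rewrite here replaces an inference ("seemed frustrated") with an observable behaviour, as in the table above.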
3.4 Summary
auraScribe is designed to fall outside the scope of Art. 3(39) because:
- It does not have the purpose of identifying or inferring emotions
- Its use of biometric data is limited to speaker identification (diarisation)
- A mandatory compliance agent ensures no emotion-inferring language reaches users
- All outputs describe observable behaviours, not emotional states
4. Capabilities and Limitations
What auraScribe Does Well
- Transcription of clear audio with distinct speakers
- Identifying recurring communication patterns across multiple meetings
- Providing structured, actionable meeting summaries
- Quantifying objective conversational metrics (speaking time, interruptions, questions)
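The metrics in the last bullet can be computed directly from diarised transcript segments. A minimal sketch, assuming a hypothetical segment format of `(speaker, start_sec, end_sec, text)` tuples (not auraScribe's internal representation):

```python
from collections import defaultdict

# Hypothetical segment format: (speaker, start_sec, end_sec, text).

def speaking_time(segments):
    """Total seconds spoken per speaker."""
    totals = defaultdict(float)
    for speaker, start, end, _ in segments:
        totals[speaker] += end - start
    return dict(totals)

def interruption_count(segments):
    """Count segments that start before the previous speaker finished."""
    count = 0
    for prev, cur in zip(segments, segments[1:]):
        if cur[0] != prev[0] and cur[1] < prev[2]:
            count += 1
    return count

def question_ratio(segments, speaker):
    """Fraction of a speaker's utterances that end with a question mark."""
    texts = [t for s, _, _, t in segments if s == speaker]
    if not texts:
        return 0.0
    return sum(t.rstrip().endswith("?") for t in texts) / len(texts)
```

Note that these are counts and durations only; nothing in the computation observes or infers an emotional state.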
Known Limitations
- Poor audio quality significantly degrades transcription accuracy
- Overlapping speakers may cause misattribution of statements
- Heavy accents or non-native speech may increase transcription errors
- Specialised jargon (technical, medical, legal) may be transcribed incorrectly
- Speaker identification may be unreliable with very similar voices or frequent speaker changes
- Context understanding — the AI may misinterpret sarcasm, humour, or culturally specific communication norms
- Behavioural analysis is statistical, not contextual. A high interruption count does not necessarily indicate negative behaviour — it could reflect an energetic brainstorming session
5. Human Oversight
auraScribe includes the following human oversight mechanisms:
- Review stage: All AI outputs are presented to the user for review. auraScribe does not take autonomous actions based on its analysis — it presents information for human interpretation.
- User control: Users can delete any recording, transcript, or analysis at any time.
- Host control: Meeting hosts can remove the bot from any meeting at any time.
- Correction: Users can flag inaccurate transcriptions or analyses for review.
auraScribe is a decision-support tool. It does not make decisions about individuals, evaluate performance for employment purposes, or take any automated action with legal or similarly significant effects.
6. Data Processing Summary
| Aspect | Detail |
| --- | --- |
| AI provider | Google Gemini AI |
| Data location | EU (Google Cloud europe-west regions) |
| Audio storage | Google Cloud Storage (EU), user-controlled deletion |
| Results storage | Google Firestore (EU), user-controlled deletion |
| Training | Your data is not used to train AI models |
| Human review of content | We do not access or review your meeting content unless required for technical support at your request, or as required by law |
7. Data Sharing with Third Parties
For complete information about data shared with third-party services, including Google Gemini and Meeting BaaS, see our Privacy Policy — Section 3: Third-Party Audio & AI Data Processing.
8. Contact
For questions about auraScribe’s AI systems, transparency practices, or this disclosure, contact us at privacy@aurascribe.com.