AnneNext
Technical overview for non-techies
What's anne?
First, a question:
From a techie's perspective:
- Client-facing software (AnneNext, "the face")
- "The Dashboard"
- A bunch of servers
- Speech and the libraries around it
- Telemetry
- AI and user-focused machine learning
the client
...or what the users see
Built with
- Electron
- Node
- JavaScript
- ...and ~3,000 other smaller libraries
may be the most "visible" part...
but it's only around 40% of the code
the dashboard
...or what makes anne behave one way or another
The dashboard is:
- The other client-facing part
- What makes Anne "tick"
- Allows us to be very user-centric: their Anne is their own, and they get to decide how she behaves.
- Anything can be made a setting, and in time, most things will.
The dashboard HAS:
- Settings for every module
- Speech settings
- Fine-grained permissions
- Content / media control
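To make that concrete, here is a sketch (in JavaScript, the client's own language) of what a per-user settings record could look like. Every field name here is invented for illustration; it is not Anne's real schema.

```javascript
// Hypothetical per-user settings record — the field names are
// invented for illustration and are NOT Anne's real schema.
const userSettings = {
  speech: {
    language: "en-GB", // which language Anne speaks
    voice: "female-1", // which voice she uses
  },
  permissions: {
    camera: false,     // fine-grained: deny camera access
    calendar: true,    // allow calendar access
  },
  media: {
    newsSources: ["local", "world"], // content / media control
  },
};
```

Because "anything can be made a setting", a record like this just keeps growing new fields as modules gain options.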
the server bunch
...or the technical mumbo-jumbo that makes it all work together
server-side anatomy
- Amazon services - authentication, data storage, automation, settings (AKA an "API", but that's quite inaccurate here).
- Miniflux - the server software we use for news.
- Nextcloud - the server software we use for calendars (and potentially much more).
more importantly...
- The thing that makes the dashboard work...
- Which in turn allows the AnneNext client to work.
- In a way, the "heart" of the operation.
the speech stack
...ASR, TTS, classifiers and other big words. Simplified, I promise :)
TTS - text to speech
- You send it text, it speaks it out
- It can be in various languages
- It can use various voices
- And we aim to make it all configurable (via the dashboard)
ASR - automatic speech recognition
- AKA Speech to text (STT)
- You speak to it, and it turns that speech into text
- Also supports various languages
- We also aim to make it configurable via the dashboard
the avatar library
- The part that ties ASR, TTS and the AnneNext client together.
- Where ASR brings recognition of phrases, the library makes the client "understand" the phrases - what should be shown on screen, what should Anne say back, etc.
- This is a part of the local AI (Anne comes with multiple AI elements).
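As a toy illustration of that "understanding" step: the real avatar library uses classifiers and is far more capable, but the core idea of mapping a recognized phrase to a response can be sketched like this (all names and phrases below are invented):

```javascript
// Toy sketch: map a phrase (from ASR) to what Anne should do.
// The real avatar library uses classifiers, not keyword matching.
const intents = [
  { match: /weather/i, reply: "Here is today's weather." },
  { match: /news/i,    reply: "Opening your news feed." },
];

// Takes text from ASR, returns what Anne says back (via TTS).
function understand(phrase) {
  const hit = intents.find((intent) => intent.match.test(phrase));
  return hit ? hit.reply : "Sorry, I didn't catch that.";
}
```

So `understand("What's the weather like?")` would return the weather reply, which the client then shows on screen and hands to TTS to speak out.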
A short recap
- AnneNext client reads settings from servers
- Dashboard writes settings to servers
- ASR recognizes speech
- TTS makes text into speech
- The Avatar library makes ASR and TTS "talk" to the AnneNext client.
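The recap above can be sketched as one round trip, with a plain in-memory object standing in for the real Amazon-hosted servers and hypothetical function names throughout:

```javascript
// Stand-in for the real server-side settings store.
const server = { settings: {} };

// Dashboard side: write a setting to the server.
function dashboardWrite(key, value) {
  server.settings[key] = value;
}

// Client side: read a setting back, with a default if unset.
function clientRead(key, fallback) {
  return key in server.settings ? server.settings[key] : fallback;
}

// The dashboard changes Anne's voice; the client picks it up.
dashboardWrite("speech.voice", "female-1");
console.log(clientRead("speech.voice", "default")); // "female-1"
```

The point of the split: the dashboard never talks to the client directly — both only talk to the servers, which is why the servers are the "heart" of the operation.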
telemetry & AI
...or how we get to know whether a user likes something. and much more!
the data we collect...
- Is completely anonymized*
- We collect it to learn from it:
- How to improve Anne as a product
- How to improve Anne for the user
- Make sure the user is doing well
- Find ways to make their lives easier
the data we collect...
- Allows caregivers to track well-being
- Track trends across time
- Provide meaningful insights via our AI /
Analysis tools
the data we collect...
- Is a vital part of the global AI (the one that runs on servers)
- Will ultimately allow us to adapt Anne to the user's needs
- "Seeing" a user's habits, Anne will make herself more useful to said user
- The more they use her, the better she gets
thank you!
QUESTIONS?
AnneNext overview
By Darko Bozhinovski