Try Face Login, Deepfake Detection & Payment SSO — interactive demos running on cdn.mimicx.ai
Applications
API Playground
Request
Customize the AI's behavior. Leave empty for default.
Realtime Transcription
Uses the API key selected above in the playground — streams to /api/v1/stt/stream
Translation Options — 130+ languages · African coverage
Upload Image or Video
Drag & drop or click to upload
PNG, JPG, WEBP, MP4, WEBM — max 10MB
File sent as base64 in input_data.
Upload or Record Audio — WAV, MP3, WebM, OGG, M4A (max 10MB)
Audio sent as audio_base64 in the request body.
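Both upload paths reduce to the same client-side step: base64-encode the file and place it under the right field (`input_data` for image/video, `audio_base64` for audio, per the notes above). A minimal sketch — the overall payload shape is illustrative, not the full request schema:

```python
import base64

def encode_file_b64(path: str) -> str:
    """Read a local file and return its base64 text for the JSON body."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

def build_media_payload(path: str, kind: str) -> dict:
    """Place the encoded file under the field the playground uses.

    Image/video -> "input_data"; audio -> "audio_base64".
    Any other fields of the real request are omitted here.
    """
    field = "audio_base64" if kind == "audio" else "input_data"
    return {field: encode_file_b64(path)}
```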
Compare Two Faces
Upload both images — Probe (Image A) vs Reference (Image B). JPG, PNG, WEBP.
Audio A vs Audio B — WAV, MP3, WebM
Threshold: 0.60 (score ≥ threshold = match)
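The verification rule is exactly the comparison stated above; as a one-line sketch:

```python
def is_match(score: float, threshold: float = 0.60) -> bool:
    """Verification decision: a score at or above the threshold is a match."""
    return score >= threshold
```

Raising the threshold trades fewer false accepts for more false rejects.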
Enroll Subject
Upload subject image (PNG, JPG, WEBP — max 10MB) or audio (WAV, MP3, WebM, OGG, M4A).
Match Against Gallery
Upload probe image or audio (WAV, MP3, WebM, OGG, M4A).
Audio A vs Audio B — WAV, MP3, WebM
Threshold: 0.80 (score ≥ threshold = match)
Identify: Open-Set Search
Upload probe image or audio (WAV, MP3, WebM, OGG, M4A).
Audio A vs Audio B — WAV, MP3, WebM
Threshold: 0.80 (score ≥ threshold = match)
Delete Enrollment
This permanently removes the subject's biometric template from the gallery.
Response
Audio Output
API Keys
API keys are shown only once at creation. Store them securely.
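Since keys are shown only once, a common pattern is to keep them out of source code entirely and read them from the environment at startup. A sketch — the variable name `MIMICX_API_KEY` is only an example, not an official convention:

```python
import os

def load_api_key(var: str = "MIMICX_API_KEY") -> str:
    """Read the API key from the environment; fail fast if it is missing."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before running")
    return key
```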
Quick Start
Get started with mimicxai in under 5 minutes.
1. Create an application
Click New Application, select the model families and tasks your app needs (biometrix, darwin, emoticore), choose your environment (test/live), and save your API key.
2. Integrate
Python
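A minimal Python sketch of a first call. The gateway URL, Bearer auth, and key prefix are taken from the steps in this Quick Start; the body fields (`model`, `input_data`) are illustrative, not the authoritative schema:

```python
BASE_URL = "https://model.mimicx.ai/api/v1"

def build_predict_request(api_key: str, model: str, text: str):
    """Assemble URL, headers and JSON body for POST /predict."""
    url = f"{BASE_URL}/predict"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "input_data": text}
    return url, headers, body

if __name__ == "__main__":
    import requests  # third-party: pip install requests
    url, headers, body = build_predict_request("mx_test_your_key_here", "darwin", "Hello!")
    print(requests.post(url, headers=headers, json=body, timeout=30).json())
```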
3. Gateway Endpoints
POST https://model.mimicx.ai/api/v1/predict — Run inference on any model
POST https://model.mimicx.ai/api/v1/generate — Generate text (Darwin models)
GET https://model.mimicx.ai/api/v1/models — List models available to your key
All endpoints require Authorization: Bearer mx_live_...
Secure payments powered by Stripe. Pay with card, PayPal, Google Pay, or Apple Pay. Cancel anytime.
Organization
My Organization
Enterprise Plan
1 member · 0 pending invites
Members
Pending Invites
No pending invites
DarwinDNA: Physical AI Brain
Flash DarwinDNA as a standalone AI operating system on embedded hardware. No cloud required.
BETA
DarwinDNA OS v1.0
Your AI. Your hardware. No internet needed.
DarwinDNA compiles the Darwin inference engine into a lightweight OS image that runs natively on Raspberry Pi, and firmware for Arduino & ESP32 microcontrollers — biometrics, reasoning, TTS & STT, all local.
Raspberry Pi
Full OS image
Arduino
Sensor firmware
ESP32
Wi-Fi AI node
Pi 4B
4 cores · up to 8GB RAM
Recommended
Pi 5
4 cores · 4–16GB RAM
Fastest
Pi 3B+
4 cores · 1GB RAM
Lightweight mode
Pi Zero 2W
4 cores · 512MB RAM
Minimal
Select modules to embed
Flash instructions: Raspberry Pi 4B
1
Download DarwinDNA OS image
A customized Raspberry Pi OS image with Darwin pre-installed, model weights bundled and systemd services configured.
2
Flash to microSD (≥16 GB, class 10)
Use Raspberry Pi Imager or balenaEtcher to write the image to your card.
# Using rpi-imager CLI (recommended)
rpi-imager --cli darwin-dna-rpi4-v1.0.img.xz /dev/sdX
# Or with dd (advanced)
xz -d darwin-dna-rpi4-v1.0.img.xz
sudo dd if=darwin-dna-rpi4-v1.0.img of=/dev/sdX bs=4M status=progress conv=fsync
3
Pre-configure Wi-Fi & API key (optional, headless)
Mount the boot partition and edit the config file before first boot.
4
First boot
Insert the SD card and power on your Pi. DarwinDNA auto-starts on boot. Access the local dashboard at http://darwin-brain.local or the IP shown on HDMI.
# Test from any device on same Wi-Fi
curl http://darwin-brain.local:8080/health
# Run inference locally
curl -X POST http://darwin-brain.local:8080/api/chat \
-H "Content-Type: application/json" \
-d '{"message":"Hello Darwin","stream":false}'
5
Connect Arduino/ESP32 sensors (optional)
Pair embedded microcontrollers as peripheral sensors via USB serial or Wi-Fi MQTT. The Pi acts as the AI brain; sensors stream raw data up.
Arduino acts as a peripheral sensor MCU — it reads sensors (PIR, ultrasonic, fingerprint scanner, IMU, microphone) and streams structured data to the Raspberry Pi Darwin brain over USB Serial or I²C. The Pi handles all AI inference; Arduino handles real-time sensor I/O.
Arduino Uno
ATmega328P · USB-B
Mega 2560
54 digital I/O pins
Nano
Compact · breadboard
MKR WiFi 1010
Wi-Fi + crypto chip
Firmware sketch: Darwin sensor bridge
/*
 * DarwinDNA Sensor Bridge — Arduino Firmware
 * Streams sensor readings to Raspberry Pi over USB Serial
 * Protocol: JSON lines at 115200 baud
 */
#include "protocol.h"
#include "sensors.ino"

// ── Config ──────────────────────────────────────
#define BAUD_RATE    115200
#define SENSOR_TICK  50    // ms between readings
#define HEARTBEAT_MS 1000  // alive ping every 1s

unsigned long lastTick = 0;
unsigned long lastHB = 0;

void setup() {
  Serial.begin(BAUD_RATE);
  initSensors();
  Serial.println("{\"type\":\"boot\",\"fw\":\"darwin-dna-1.0\"}");
}

void loop() {
  unsigned long now = millis();

  // Read & stream sensors
  if (now - lastTick >= SENSOR_TICK) {
    lastTick = now;
    SensorFrame f = readSensors();
    sendFrame(f);
  }

  // Heartbeat
  if (now - lastHB >= HEARTBEAT_MS) {
    lastHB = now;
    Serial.println("{\"type\":\"hb\"}");
  }

  // Handle commands from Pi
  while (Serial.available()) {
    String cmd = Serial.readStringUntil('\n');
    handleCommand(cmd);
  }
}
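On the Pi side, the JSON-lines protocol emitted by the sketch above can be consumed with a short Python reader. A sketch: `parse_frame` is pure stdlib; the serial loop assumes the third-party pyserial package, and `/dev/ttyACM0` is a typical (not guaranteed) device path for a USB-attached Arduino:

```python
import json

def parse_frame(line: str):
    """Parse one JSON line from the sensor bridge; None for blank or corrupt lines."""
    line = line.strip()
    if not line:
        return None
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return None

def read_frames(port: str = "/dev/ttyACM0", baud: int = 115200):
    """Yield boot/hb/sensor frames from the Arduino over USB serial."""
    import serial  # third-party: pip install pyserial
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            frame = parse_frame(ser.readline().decode("utf-8", errors="replace"))
            if frame is not None:
                yield frame
```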
ESP32 runs a standalone Wi-Fi AI node — it can execute ultra-lightweight Darwin Nano inference locally (no Pi needed for simple tasks), publish sensor data via MQTT to the Pi brain, or act as a wireless camera node streaming frames for remote biometric processing.
Standalone Inference
Darwin Nano model runs entirely on ESP32 — no Wi-Fi needed
MQTT Sensor Node
Streams sensor data to Pi brain via local MQTT broker
Camera Stream Node
ESP32-CAM sends frames to Pi for face/object recognition
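For the MQTT sensor-node mode, the message shape matters more than the transport. Below is a Python sketch of the payload such a node could publish to the Pi brain; the topic layout and fields are illustrative (real ESP32 firmware would be C++), and the publisher assumes the third-party paho-mqtt package plus a broker reachable on the Pi:

```python
import json, time

SENSOR_TOPIC = "darwin/sensors/esp32-01"  # illustrative topic layout

def build_sensor_message(readings: dict) -> str:
    """Serialize one reading as the JSON payload a node publishes."""
    return json.dumps({"type": "sensor", "ts": int(time.time()), **readings})

def publish_loop(broker: str = "darwin-brain.local"):
    """Publish a reading each second to the Pi brain's MQTT broker."""
    import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt
    client = mqtt.Client()
    client.connect(broker, 1883)
    while True:
        client.publish(SENSOR_TOPIC, build_sensor_message({"distance_cm": 42.0}))
        time.sleep(1)
```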
Replace the stock GPT integration with Darwin AI — fully local inference, biometric face recognition, Darwin STT/TTS, real-time obstacle reasoning, and emotion-mapped servo actions. No OpenAI key required.
1
Boot the Pi and SSH in
Start from the DarwinDNA Raspberry Pi image (see Deploy page). Boot the Pi, verify Wi-Fi is connected, then SSH in.
ssh pi@darwin-brain.local # default pass: darwin
2
Install PiDog Python libraries
The DarwinDNA image includes Python 3.11. Install the SunFounder stack on top — robot-hat (HAT communication), vilib (vision), and pidog (servo/sensor abstraction).
# Robot HAT driver (I²C / PWM)
cd ~/ && git clone -b v2.0 https://github.com/sunfounder/robot-hat.git
cd robot-hat && sudo pip3 install . --break-system-packages
# Vision library (camera + OpenCV)
cd ~/ && git clone -b picamera2 https://github.com/sunfounder/vilib.git
cd vilib && sudo pip3 install . --break-system-packages
# PiDog servo + sensor library
cd ~/ && git clone https://github.com/sunfounder/pidog.git
cd pidog && sudo pip3 install . --break-system-packages
# Audio: I²S speaker amp
cd ~/pidog && sudo bash i2samp.sh
3
Install DarwinDog integration layer
This is the bridge between the Darwin AI services running on DarwinDNA OS and the PiDog hardware library.
pip3 install darwin-dog --break-system-packages
# Or from source
git clone https://github.com/mimicxai/darwin-dog.git
cd darwin-dog && pip3 install . --break-system-packages
4
Configure your API key and modules
# /boot/darwin-dna.conf (already set if you pre-configured the image)
DARWIN_API_KEY="mx_live_your_key_here"
MODULES="inference,biometrix,stt,tts"

# darwin-dog config — /home/pi/darwin-dog.yml
darwin_url: http://localhost:8080
wake_word: hey darwin
dog_name: Darwin
stt_language: en-us
tts_model: en_US-ryan-medium
face_recognition: true
obstacle_threshold_cm: 20
emotion_leds: true
auto_start: true
5
Run the DarwinDog agent
python3 darwin_dog.py
# Or as a systemd service (auto-starts on boot)
sudo systemctl enable darwin-dog
sudo systemctl start darwin-dog
sudo journalctl -u darwin-dog -f # stream logs
darwin_dog.py: Full DarwinDNA × PiDog firmware
"""
darwin_dog.py — DarwinDNA × SunFounder PiDog
AI-native firmware: Darwin inference + biometrix + STT/TTS + servo actions
"""import time, threading, yaml, requests
from pidog import Pidog
from darwin_dog.stt import DarwinSTT
from darwin_dog.tts import DarwinTTS
from darwin_dog.vision import DarwinVision
from darwin_dog.action_map import ActionMap
from darwin_dog.emotion_leds import EmotionLEDs
# ── Config ───────────────────────────────────────────────────────withopen("/home/pi/darwin-dog.yml") as f:
cfg = yaml.safe_load(f)
DARWIN_URL = cfg["darwin_url"]
WAKE_WORD = cfg["wake_word"]
DOG_NAME = cfg["dog_name"]
OBS_THRESH = cfg["obstacle_threshold_cm"]
# ── Hardware init ─────────────────────────────────────────────────
dog = Pidog()
stt = DarwinSTT(DARWIN_URL, language=cfg["stt_language"])
tts = DarwinTTS(DARWIN_URL, model=cfg["tts_model"])
vision = DarwinVision(DARWIN_URL, face_recog=cfg["face_recognition"])
leds = EmotionLEDs(dog)
actions = ActionMap(dog)
# ── State ─────────────────────────────────────────────────────────
_running = True
_conversation = [] # rolling context sent to Darwin
_last_face = None
_obstacle_lock = threading.Event()
# ── Darwin chat helper ────────────────────────────────────────────defdarwin_chat(user_msg: str, image_b64: str = None) -> dict:
"""Send message to local Darwin inference, return {text, actions}"""
_conversation.append({"role": "user", "content": user_msg})
payload = {
"messages": _conversation[-10:], # keep last 10 turns"system": (
f"You are {DOG_NAME}, a friendly AI robot dog. ""Keep replies short and dog-like. After your text reply, ""output ACTIONS: followed by comma-separated action names. ""Available actions: sit, stand, walk_forward, walk_backward, ""turn_left, turn_right, wag_tail, shake_head, nod, bark, ""lie_down, stretch, paw, jump, spin, howl, happy_dance."
),
"stream": False,
}
if image_b64:
payload["image"] = image_b64
r = requests.post(f"{DARWIN_URL}/api/chat", json=payload, timeout=15)
reply = r.json()["reply"]
# Parse ACTIONS: tag
text, _, action_str = reply.partition("ACTIONS:")
action_list = [a.strip() for a in action_str.split(",")] if action_str else []
_conversation.append({"role": "assistant", "content": reply})
return {"text": text.strip(), "actions": action_list}
# ── Obstacle watchdog (background thread) ────────────────────────defobstacle_watchdog():
while _running:
dist = dog.read_distance()
if dist is not Noneand dist < OBS_THRESH:
_obstacle_lock.set()
leds.set_emotion("alert")
dog.do_action("walk_backward", step_count=2)
time.sleep(1)
_obstacle_lock.clear()
time.sleep(0.1)
# ── Touch callbacks ───────────────────────────────────────────────defon_head_touch(position):
# front = pet, rear = scratch
resp = darwin_chat(f"Someone touched my {position} head sensor.")
leds.set_emotion("happy")
actions.run(resp["actions"])
tts.speak(resp["text"])
dog.set_touch_callback(on_head_touch)
# ── Main voice loop ───────────────────────────────────────────────defvoice_loop():
print(f"[darwin-dog] Listening for wake word: '{WAKE_WORD}'")
dog.do_action("sit")
leds.set_emotion("idle")
while _running:
if _obstacle_lock.is_set():
time.sleep(0.1); continue
phrase = stt.listen_for_wake_word(WAKE_WORD)
if not phrase:
continue# Woke up — LED breathing + stand
leds.breathe("pink")
dog.do_action("stand")
tts.speak("Hi there!")
# Capture a camera frame for vision context
image_b64 = vision.capture_frame_b64() if cfg["face_recognition"] elseNone# Check if known face detectedif image_b64:
face = vision.identify_face(image_b64)
if face:
phrase = f"I see {face['name']}. {phrase}"# Listen for full command
leds.set_emotion("listening")
command = stt.listen_command(timeout=8)
if not command:
leds.set_emotion("idle")
dog.do_action("sit")
continue# Darwin thinks
leds.set_emotion("thinking")
resp = darwin_chat(command, image_b64=image_b64)
# Execute servo actions + speak simultaneously
leds.set_emotion("happy")
t_actions = threading.Thread(target=actions.run, args=(resp["actions"],), daemon=True)
t_actions.start()
tts.speak(resp["text"])
t_actions.join()
dog.do_action("sit")
leds.set_emotion("idle")
# ── Start ─────────────────────────────────────────────────────────if __name__ == "__main__":
threading.Thread(target=obstacle_watchdog, daemon=True).start()
try:
voice_loop()
except KeyboardInterrupt:
_running = False
dog.do_action("lie_down")
leds.off()
The Generic Pi Robot template gives you a blank DarwinDNA firmware skeleton — configure your own servo layout, sensors and action map. Works with any Robot HAT-compatible chassis.
Custom Build
Design your own robot hardware layout and generate a tailored DarwinDNA firmware config. Bring your own servos, sensors and chassis — we generate the firmware skeleton.
Add Team Member
Edit Organization
Create Application
Test keys have mx_test_ prefix, live keys have mx_live_ prefix.
Leave empty for no expiration. API keys will stop working after this date.