Open the lens
Grant camera and orientation access. Walk to a corner and tap a room button to mark it.
Camera + gyroscope + mandala overlay. Walk a room, tap a corner, mark a placement, and the API audits live. No frames leave the device.
Your browser will ask for camera access. iOS will also ask for motion. Both stay on-device — Vedika never sees the frames.
Each cell belongs to a deity: Brahma at the centre, Agni in the southeast, Yama in the south. Drop a room and the audit fires live.
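The overlay's nine-zone lookup can be sketched as a plain mapping. The centre, southeast, and south assignments come straight from the card above; the remaining deities are the commonly cited Vastu Purusha Mandala attributions, not necessarily the exact labels Vedika renders.

```python
# Nine-zone Vastu Purusha Mandala deity lookup (illustrative).
# "C" is the centre cell; the other keys are compass zones.
MANDALA = {
    "NE": "Ishana", "N": "Kubera",  "NW": "Vayu",
    "E":  "Indra",  "C": "Brahma",  "W":  "Varuna",
    "SE": "Agni",   "S": "Yama",    "SW": "Nirrti",
}

def deity_for_zone(zone: str) -> str:
    """Return the presiding deity for a mandala cell."""
    return MANDALA[zone]
```

Dropping a room on a cell then reduces to a dictionary lookup, e.g. `deity_for_zone("SE")` for a kitchen placement.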
Mark the entrance and facing direction to score the door pada (one of 32 wall segments).
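A minimal sketch of how a door position might map to a 1–32 pada index: each of the four walls is split into 8 equal segments. The clockwise N→E→S→W numbering and the `door_pada` helper are assumptions for illustration, not Vedika's actual scoring algorithm.

```python
def door_pada(wall: str, offset: float) -> int:
    """Return the 1-32 pada index of a door (illustrative convention).

    wall:   which wall the door sits on ('N', 'E', 'S', 'W').
    offset: fractional position of the door's midpoint along that wall,
            0.0-1.0, measured clockwise when viewed from above.
    """
    walls = ["N", "E", "S", "W"]  # assumed clockwise numbering order
    if wall not in walls:
        raise ValueError(f"unknown wall: {wall}")
    if not 0.0 <= offset <= 1.0:
        raise ValueError("offset must be within [0, 1]")
    local = min(int(offset * 8), 7)       # 0-7 segment on this wall
    return walls.index(wall) * 8 + local + 1
```

Under this convention a door at the start of the north wall scores pada 1 and one at the far end of the west wall scores pada 32.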
Tap to drop vertices, double-tap to close. The polygon is sent to /plot/shape and /plot/ratio for classical classification.
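An offline sketch of the kind of geometry such endpoints evaluate: polygon area via the shoelace formula and a bounding-box side ratio. The payload field names and classification thresholds of /plot/shape and /plot/ratio are not documented here, so this is illustrative only.

```python
def plot_metrics(vertices):
    """Compute area and side ratio for a closed plot polygon.

    vertices: [(x, y), ...] in metres, in drawing order.
    Returns (area, ratio); classical texts tend to favour ratios
    near 1:1 (square) for residential plots.
    """
    n = len(vertices)
    # Shoelace formula for the enclosed area.
    area = abs(sum(
        vertices[i][0] * vertices[(i + 1) % n][1]
        - vertices[(i + 1) % n][0] * vertices[i][1]
        for i in range(n)
    )) / 2.0
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    ratio = max(width, height) / min(width, height)
    return area, ratio
```

A 10 m × 10 m square plot yields an area of 100 m² and a ratio of exactly 1.0.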
Scroll a card into view — it fires the API and renders the response right there. No mocks, no stubs.
Chat is the same engine that grades the audit. Citations appear as chips under each answer.
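Rendering those citation chips client-side might look like the sketch below. The response shape (`answer` plus a list of `citations` with `source` fields) is an assumed schema for illustration, not the API's documented one.

```python
def render_chips(reply: dict) -> str:
    """Format a chat answer with its citation chips as plain text.

    reply: assumed shape {"answer": str,
                          "citations": [{"source": str, ...}, ...]}.
    """
    chips = " ".join(f"[{c['source']}]" for c in reply.get("citations", []))
    return f"{reply['answer']}\n{chips}" if chips else reply["answer"]
```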
Same APIs, your code. JSON in, classical-cited Vastu out. No models to host, no shastra to memorise.
curl -X POST https://api.vedika.io/v2/astrology/vastu/audit/floor-plan \
  -H "Authorization: Bearer vk_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "rooms": [
      { "room": "kitchen", "zone": "SE" },
      { "room": "master_bedroom", "zone": "SW" },
      { "room": "pooja", "zone": "NE" },
      { "room": "toilet", "zone": "NW" },
      { "room": "entrance", "zone": "E" }
    ]
  }'
import { Vedika } from '@vedika-io/sdk';

const v = new Vedika({ apiKey: process.env.VEDIKA_API_KEY });

const audit = await v.vastu.auditFloorPlan({
  rooms: [
    { room: 'kitchen', zone: 'SE' },
    { room: 'master_bedroom', zone: 'SW' },
    { room: 'pooja', zone: 'NE' },
    { room: 'toilet', zone: 'NW' },
    { room: 'entrance', zone: 'E' }
  ]
});

console.log(audit.score, audit.defects);
from vedika import Vedika

v = Vedika(api_key="vk_live_...")

audit = v.vastu.audit_floor_plan(rooms=[
    {"room": "kitchen", "zone": "SE"},
    {"room": "master_bedroom", "zone": "SW"},
    {"room": "pooja", "zone": "NE"},
    {"room": "toilet", "zone": "NW"},
    {"room": "entrance", "zone": "E"},
])

print(audit.score, audit.defects)