Open-source multimodal LLM visual analysis utility. Build & share AI vision prompts augmented with native device sensor data.

Multimodal Multitool

Experiment with visual multimodal models and use them to interpret your environment in new ways.

Screenshot of CrayEye app showing full screen camera preview of a bird on a city street with a button labeled '🐦 What kind of bird is this?'

Interpret your World

Use AI to analyze your environment through your smartphone's camera
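
Under the hood, this amounts to sending the captured frame plus a prompt to a vision-capable model. As a rough illustration of the request shape (not CrayEye's own code: the provider, model name, and file path here are assumptions), a minimal Python sketch using OpenAI's chat completions API:

```python
import base64
import requests

def ask_about_image(image_path: str, prompt: str, api_key: str) -> str:
    # Encode the captured camera frame as a base64 data URL.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "gpt-4o",  # assumed model; any vision-capable model works
            "messages": [{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Example: the bird prompt from the screenshot above.
# print(ask_about_image("frame.jpg", "What kind of bird is this?", "sk-..."))
```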

Screenshot of CrayEye app showing a dialog box titled 'Edit Prompt' with fields for 'Title' and 'Prompt' - the 'Title' value is '🔈 What sound does it make?' and the 'Prompt' value is 'Analyze the image and do your best to determine what sound(s) the item(s) in the focal...'

Edit Prompts

Write custom prompts, augmented by your device's sensors (e.g. location)
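
Sensor augmentation works by interpolating current device readings into the prompt text before it is sent with the image. The token syntax below is hypothetical (CrayEye's actual placeholder format may differ); a minimal sketch of the idea:

```python
# Hypothetical sensor readings keyed by placeholder name.
SENSOR_READINGS = {
    "location.latitude": "37.7749",
    "location.longitude": "-122.4194",
}

def fill_prompt(template: str, readings: dict[str, str]) -> str:
    # Replace each {sensor.key} token with the current device reading
    # before the prompt is sent alongside the camera frame.
    for key, value in readings.items():
        template = template.replace("{" + key + "}", value)
    return template

print(fill_prompt(
    "Identify this bird and say whether it is commonly seen near "
    "latitude {location.latitude}, longitude {location.longitude}.",
    SENSOR_READINGS,
))
```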

Screenshot of CrayEye app showing a list of prompts in a drawer titled 'Prompts' - the prompts include 'What's this made of?', 'What sound does it make?', 'What kind of bird is this?', 'What's it weigh?', 'Who made this?', and 'Calorie Counter'. There is a button to add a new prompt and a context menu for each prompt with options to 'Edit,' 'Delete,' or 'Share' the prompt.

Share with Friends

Share the prompts you create or edit, and add your friends' prompts to your own collection
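
The actual share format isn't documented here; as a hypothetical sketch, a prompt could be serialized to JSON and carried in a deep link that the receiving app decodes back into a prompt entry:

```python
import json
from urllib.parse import quote, unquote

# Hypothetical share format: title and prompt text serialized to JSON
# inside a deep link. CrayEye's real sharing mechanism may differ.
def encode_share_link(title: str, prompt: str) -> str:
    payload = json.dumps({"title": title, "prompt": prompt})
    return "crayeye://prompt?data=" + quote(payload)

def decode_share_link(link: str) -> dict:
    payload = unquote(link.split("data=", 1)[1])
    return json.loads(payload)

link = encode_share_link("🐦 What kind of bird is this?",
                         "Identify the bird in focus and describe it.")
print(decode_share_link(link))  # round-trips the shared prompt
```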

CrayEye is the product of AI-driven development. Read more about how it was created here:

🛠️ How it's Made