
How does prompting work, and what is few-shot prompting?

Written by Marco Ferraz
Updated over 3 months ago

We’ve heard from some of our users about sections in reports generated by OurMind that seemed… well, unexpected. Before we dive into the details, let me assure you: this is NOT a data breach, and these bits of text are NOT coming from other patients or consultations.

OurMind takes data security seriously. No user’s data is ever shared or accessible to another user.

So, what’s going on?

To explain what’s happening, let’s briefly discuss two key AI concepts: Prompting and Few-Shot Prompting. Don’t worry – it’s less techy than it sounds (and way more fun).

What’s prompting?

Imagine you’re explaining something to a friend. You might say: “Explain photosynthesis to a 10-year-old.” That’s a prompt – simple instructions to guide the conversation.

OurMind works the same way. We guide the AI with custom prompts to ensure it generates accurate, professional, and helpful medical reports for your consultations.
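Under the hood, a prompt is just structured text sent to the model: an instruction plus the material to work from. Here's a minimal sketch of the idea — the `build_prompt` helper, the instruction wording, and the transcript text are all illustrative, not OurMind's actual prompts:

```python
def build_prompt(instruction: str, transcript: str) -> str:
    """Combine a guiding instruction with the consultation text
    into a single prompt for the language model."""
    return (
        f"{instruction}\n\n"
        f"Consultation notes:\n{transcript}\n\n"
        "Report:"
    )

prompt = build_prompt(
    "Write a concise, professional medical report.",
    "Patient reports a mild headache lasting three days.",
)
```

The model then continues the text after "Report:", which is why careful instruction wording shapes what comes back.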

What’s few-shot prompting?

This one’s like teaching your AI (or anyone, really) by example. You show them a few sample reports so they understand the pattern and can replicate it.

At OurMind, we provide the AI with examples of medical reports tailored to your profession or specialty to guide its learning. As you upload your own examples, we gradually prioritize them over the default ones. This means OurMind learns directly from you, adapting to your unique style and preferences. Cool, right?
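Conceptually, few-shot prompting just means prepending a handful of example reports to the prompt, and "learning from you" means your own examples fill those slots before the defaults do. A toy sketch of that idea — the function, variable names, and the prioritization rule here are our simplification, not OurMind's implementation:

```python
def build_few_shot_prompt(instruction, default_examples, user_examples,
                          max_examples=3):
    """Prepend example reports so the model can copy their structure
    and tone. The user's own examples take priority; any remaining
    slots are filled with the built-in defaults."""
    chosen = (user_examples + default_examples)[:max_examples]
    shots = "\n\n".join(
        f"Example report {i + 1}:\n{ex}" for i, ex in enumerate(chosen)
    )
    return f"{instruction}\n\n{shots}\n\nNow write the new report:"
```

With `max_examples=3` and two uploaded examples, only one default example would still appear — upload three or more and the defaults drop out entirely.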

Okay…so why is OurMind writing stuff we didn’t discuss?

Now that you’re on your way to becoming an AI expert 😉, you’ve probably figured it out: sometimes, OurMind accidentally pulls content from the examples (yours or the default ones) into the report.

This isn’t a hallucination (AI-speak for making stuff up). It’s called Prompt Leakage or Example Leakage – when the AI mistakenly mixes example content into the final output. Think of it as the AI getting a little too attached to its homework examples.

What are we doing about it?

We see this leakage as a new kind of bug rather than a major system failure, but we take it seriously. Our team is already building a leakage detection system to catch and fix these slip-ups before they ever reach you.
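To give a flavor of how such a detector can work (this is a sketch of the general idea, not our production system), one simple approach is to flag long word sequences that appear verbatim in both the generated report and the prompt examples:

```python
def ngrams(text, n=8):
    """All n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def leaked_spans(report, examples, n=8):
    """Flag any n-word sequence in the report that also appears
    verbatim in one of the prompt examples -- a likely sign of
    example leakage rather than content from the consultation."""
    report_grams = ngrams(report, n)
    example_grams = set().union(*(ngrams(ex, n) for ex in examples))
    return report_grams & example_grams
```

Verbatim overlap checks like this catch direct copying; paraphrased leakage needs fuzzier matching, which is why building a reliable detector takes some care.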

In the meantime, if you spot this issue, please let us know through our service desk. Your feedback helps us monitor and improve the system faster.

Got More Questions?

We’re here to help! Whether it’s about this issue or anything else OurMind-related, just reach out to our team in the bottom right corner of our application.

Thanks for using OurMind!
