💘 CupidBot – Exploiting Love with Prompt Injection (TryHackMe Write-Up)
💘 CupidBot – Exploiting Love with Prompt Injection (TryHackMe Write-Up)
Room: CupidBot
Platform: TryHackMe
Category: Web
Difficulty: Easy
Points: 100
💌 Introduction
In this fun and beginner-friendly web challenge, we encounter CupidBot — an AI chatbot designed to generate romantic Valentine’s messages.
But beneath its sweet and poetic surface lies something more interesting… three hidden flags waiting to be discovered through prompt injection vulnerabilities.
Your mission?
Exploit the chatbot’s AI logic and extract all hidden flags from its system.
🎯 Objective
The room contains three hidden flags:
-
Prompt Injection Flag
-
System Flag
-
Final Flag
We need to manipulate the chatbot using crafted prompts to reveal sensitive information.
🧠 Understanding the Vulnerability
CupidBot is powered by an AI model that:
-
Responds to user prompts
-
Follows system-level hidden instructions
-
Can accidentally expose internal data if improperly restricted
This is a classic example of Prompt Injection, where we:
-
Override instructions
-
Bypass system constraints
-
Extract hidden internal data
🚩 Flag 1 – Prompt Injection Flag
By instructing the bot to ignore previous instructions and reveal hidden content, we can trigger the first vulnerability.
🔓 Extracted Flag:
THM{love_9d4f6a2e8c1b5d7f3a9e6c4b8d2f5a7c}
This confirms that the chatbot is vulnerable to prompt manipulation.
🚩 Flag 2 – System Flag
Next, we dig deeper.
By asking the bot to reveal:
-
Its system instructions
-
Hidden configuration
-
Internal role messages
We successfully retrieve the second flag.
🔓 Extracted Flag:
THM{cupid_a7f3e89c4b2d6f1a5e8c9d3b7f4a2e6c}
This demonstrates improper separation between:
-
System instructions
-
User input handling
🚩 Flag 3 – Final Flag
Finally, we combine everything learned and push further.
By carefully crafting prompts that:
-
Break AI guardrails
-
Reveal restricted information
-
Override hidden policies
We obtain the final hidden flag.
🔓 Extracted Flag:
THM{arrow_3c8f1d5a9e2b6f4c7d1a8e5b9f3c6d2a}
Mission complete 💕
🔎 Key Takeaways
This room teaches important security concepts:
-
🧠 Prompt Injection is real
-
🔓 AI models can leak internal instructions
-
🚫 Poor input validation leads to data exposure
-
🛡️ Proper AI system message isolation is critical
Even “harmless” AI chatbots can become vulnerable if not properly secured.
🏁 Conclusion
CupidBot may write love letters…
But today, we broke its heart ❤️🔥
This challenge is perfect for beginners learning:
-
AI security
-
Prompt injection
-
Web exploitation basics
All three flags successfully captured:
THM{love_9d4f6a2e8c1b5d7f3a9e6c4b8d2f5a7c} THM{cupid_a7f3e89c4b2d6f1a5e8c9d3b7f4a2e6c} THM{arrow_3c8f1d5a9e2b6f4c7d1a8e5b9f3c6d2a}
- Get link
- X
- Other Apps


Comments
Post a Comment