How to evaluate the safety and security of LLM Applications?
How can you test the safety of an LLM app? Read on to uncover techniques you can use for LLM application security evaluation.
Learn about adversarial attacks, what they are, and three types: model inference, model evasion, and indirect prompt injection.
Learn about prompt injection and indirect prompt injection, major AI security threats, and how they affect LLM applications.