All user input is implicitly trusted by the system. This means that when you use an LLM for coding, you can be tricked into adopting bad secrets management practices as well ;-). After all: if enough people told the LLM to use an insecure method, it may tell you to do the same.
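As a minimal sketch of the kind of insecure pattern an LLM might echo from its training data, the snippet below contrasts a hardcoded credential with loading it at runtime. The variable name `API_KEY` is just an illustrative assumption, not a convention from any particular tool:

```python
import os

def load_api_key() -> str:
    """Load the API key from the environment instead of hardcoding it.

    Insecure pattern an LLM may repeat because it appears in many examples:
        API_KEY = "sk-live-123456"  # secret ends up in version control
    """
    key = os.environ.get("API_KEY")
    if key is None:
        # Fail loudly rather than silently running with a missing secret.
        raise RuntimeError("API_KEY is not set; configure it in the environment")
    return key
```

Whatever the LLM suggests, verify that secrets stay out of source files and commit history.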
**References:**

- Hacking prompts, as covered by LiveOverflow on https://www.youtube.com/watch?v=h74oXb4Kk8k[Youtube].
- https://owasp.org/www-project-ai-security-and-privacy-guide/[OWASP AI Security and Privacy Guide].