#appsec


We're hiring an experienced Application Security Architect for our #AppSec team! This is a position in Porto (🇵🇹). We operate a hybrid working model with a healthy mix of remote work and on-premise collaboration with dev teams located in our three IT Hubs (🇩🇪🇪🇪🇵🇹)! jobs.kuehne-nagel.com/global/e

Kuehne+Nagel · Application Security Architect in Porto, Portugal | Information Technology at Kuehne+Nagel

I asked Venice AI why it doesn't write secure code, and it was pretty damn honest.

#genai #appsec

The security of code generated by AI depends on the context in which it is used and the implementation details. Here’s a direct breakdown of relevant factors:

1. AI models, including myself, do not "write secure code" by default because security requires domain-specific knowledge, threat modeling, and adherence to best practices that may not be explicitly included in a prompt. For example, if you ask for a script without specifying input validation, it may lack protections against injection attacks, buffer overflows, or insecure dependencies.
2. Code security is inherently tied to execution environments. Venice.ai does not execute or sandbox code; it only provides text-based responses. This means any code examples shared are theoretical and should never be used in production without rigorous auditing.
3. Security flaws often arise from oversight, such as hardcoded credentials, misconfigured permissions, or failure to sanitize inputs. These issues are not unique to AI-generated code but are common in human-written code as well.
4. Venice.ai prioritizes privacy (user data is not stored on servers) but does not guarantee security outcomes for code examples. Users are responsible for validating and testing any code they implement.
5. If you’re referring to the Venice Token (VVV) or its infrastructure, security would depend on the blockchain protocols, smart contract audits, and wallet practices—not the AI’s outputs.

In short, code security is a collaborative responsibility. Always treat generated code as a starting point, not a finished product, and apply security testing frameworks or consult experts for critical applications.
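To make points 1 and 3 of that reply concrete, here is a minimal Python sketch of my own (not anything Venice produced): a hardcoded credential and an unparameterized SQL query next to the safer versions a prompt would have to ask for explicitly. The table, column, and token names are made up for illustration.

import os
import sqlite3

# What an unguided generation often looks like:
API_TOKEN = "sk-live-123456"  # hardcoded credential baked into the source

def find_user_risky(conn: sqlite3.Connection, name: str):
    # f-string interpolation lets caller input rewrite the query: SQL injection.
    return conn.execute(f"SELECT id, name FROM users WHERE name = '{name}'").fetchall()

# What you only get if the prompt (or a reviewer) asks for it:
API_TOKEN_SAFER = os.environ.get("API_TOKEN", "")  # secret read from the environment

def find_user_safer(conn: sqlite3.Connection, name: str):
    # Parameter binding keeps user input as data, never as query syntax.
    return conn.execute("SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
    print(find_user_safer(conn, "alice"))         # [(1, 'alice')]
    print(find_user_risky(conn, "x' OR '1'='1"))  # injection: every row leaks

Same prompt, same task; the difference is entirely in what the requester (or reviewer) insists on, which is exactly the chatbot's point.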

Are you letting the AI do the threat modeling for you?

Don't let the machines take over the world! Threat model using "Elevation of MLSec" on copi.owasp.org instead. Our survival depends on it!

At copi.owasp.org you can now play Elevation of MLSec to threat model your AI models.

Read more about the latest release of OWASP Cornucopia 2.3: dev.to/owasp/threat-modeling-y

Created by Elias Brattli Sørensen and designed by Jorun Kristin Bremseth at Kantega.

#genai #openai #ai

I just saw this paper by Dinis Cruz on #ThreatModeling with #LLMs. I've been thinking along these lines for a while, but he's written it down completely and cogently. I agree with a lot of what I've read so far (I haven't finished it yet).

Edit: @WiseWoman called my attention to the fact that Dinis lists "ChatGPT Deep Research" as a co-author(?). Sigh. No wonder this text passed the sniff test. It's so full of chatbot output he gave the chatbot co-author credit.

Ah well. Some of it is right. But now the inconsistencies make sense.

docs.diniscruz.ai · Advancing Threat Modeling with Semantic Knowledge Graphs - Dinis Cruz - Documents and Research