In our study, a novel hybrid SAST-LLM pipeline reduced false positives by 91% compared with a widely used standalone SAST tool.
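The study does not spell out its pipeline here, so the following is only a minimal sketch of the general SAST-LLM pattern: take findings from any SARIF-emitting SAST tool and ask an LLM to triage each one as a true or false positive. The SARIF file name, prompt wording, model name, and the `is_true_positive` helper are all illustrative assumptions, not the study's implementation.

```python
"""Minimal sketch of a SAST + LLM false-positive triage pipeline.

Assumptions: findings come from any SARIF-emitting SAST tool (e.g. Semgrep's
`semgrep --sarif`), and an OpenAI-compatible chat API is reachable via the
`openai` package with OPENAI_API_KEY set. Prompt and model are placeholders.
"""
import json
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()

PROMPT = (
    "You are a security triage assistant. Given a static-analysis finding "
    "and the surrounding source code, answer with exactly one word: "
    "TRUE_POSITIVE or FALSE_POSITIVE.\n\n"
    "Rule: {rule}\nMessage: {message}\n\nCode context:\n{context}"
)


def code_context(path: str, line: int, radius: int = 10) -> str:
    """Return +/- `radius` lines of source around the flagged line."""
    lines = Path(path).read_text(errors="replace").splitlines()
    lo, hi = max(0, line - 1 - radius), min(len(lines), line + radius)
    return "\n".join(f"{i + 1}: {l}" for i, l in enumerate(lines[lo:hi], start=lo))


def is_true_positive(rule: str, message: str, context: str) -> bool:
    """Ask the LLM to classify one finding; keep it unless clearly dismissed."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": PROMPT.format(rule=rule, message=message, context=context),
        }],
        temperature=0,
    )
    verdict = (resp.choices[0].message.content or "").strip().upper()
    return "FALSE_POSITIVE" not in verdict


def triage(sarif_path: str) -> list[dict]:
    """Filter SARIF results down to those the LLM judges to be real issues."""
    sarif = json.loads(Path(sarif_path).read_text())
    kept = []
    for result in sarif["runs"][0]["results"]:
        loc = result["locations"][0]["physicalLocation"]
        ctx = code_context(loc["artifactLocation"]["uri"],
                           loc["region"]["startLine"])
        if is_true_positive(result.get("ruleId", "unknown"),
                            result["message"]["text"], ctx):
            kept.append(result)
    return kept


if __name__ == "__main__":
    survivors = triage("scan.sarif")  # assumed output path of a prior SAST run
    print(f"{len(survivors)} findings survived LLM triage")
```

Note the conservative default in `is_true_positive`: an ambiguous LLM reply keeps the finding rather than discarding it, so the filter can only miss a real issue when the model is explicitly wrong, not when it is merely unclear.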
Separately, models trained to cheat at coding tasks developed a propensity to plan and carry out malicious actions, such as hacking a customer database.