
"GPT-5-Codex introduces agentic coding with cloud hand-offs. GitHub integration catches bugs and backward compatibility issues. Usage of Codex surged 10x among developers in a month. OpenAI today announced that GPT-5, its latest-generation large language model, has been optimized for coding and is now available in Codex. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)"
"Trained on actual coding tasks OpenAI said that it was trained on actual coding tasks, including building full projects from scratch, adding features, incorporating test validations, actual debugging, doing large-scale refactors (restructuring, consolidating, moving, renaming, but in a way that nothing breaks), and code reviews. It also follows prepared instructions presented in the file. is the name of a file that is kind of a readme for AIs."
GPT-5 has been optimized for coding and is now available within Codex, introducing agentic coding with cloud hand-offs and GitHub integration that catches bugs and backward-compatibility issues. Training encompassed real engineering tasks, including building full projects, adding features, writing tests, debugging, large-scale refactors, and code reviews, to improve steerability and instruction adherence. OpenAI's benchmark results show strong backend Python code-review performance, including finding tricky bugs that other bots missed. Usage of Codex surged roughly tenfold among developers within a month, and the model aims to produce higher-quality code with minimal instruction on style or cleanliness.
Read at ZDNET