Dcas89 PRO
AI & ML interests
None yet
Recent Activity
reacted to hassenhamdi's post with 🔥 about 4 hours ago
Google published the paper. I shipped the code.
DeepMind just released PACEvolve (Progress-Aware Consistent Evolution), a massive overhaul of the AlphaEvolve framework. It solves the critical issues of "Context Pollution" and "Mode Collapse" that have historically crippled evolutionary coding agents.
But there was no public implementation. So I built one.
Introducing OpenPACEvolve: A fully open-source, production-grade implementation of the PACEvolve framework.
I engineered this framework solo, but I wasn't working alone: I orchestrated custom coding agents powered by Claude Opus 4.5 as engineer, with Gemini Pro 3 Preview ensuring fidelity and quality.
By leveraging these SOTA models, I was able to translate complex theoretical research into functional, modular Python architecture in record time. This is what the future of AI engineering looks like: Human architectural oversight + AI velocity.
🧠 What OpenPACEvolve Solves: Unlike standard agents that get "stuck" in loops, this framework implements the paper's full recipe for long-horizon stability:
✅ Hierarchical Context Management (HCM): Bi-level pruning to keep the agent's memory clean.
✅ Momentum-Based Backtracking (MBB): Uses "power-law backtracking" to detect stagnation and force pivots.
✅ Self-Adaptive Crossover: Intelligent code-sharing between parallel "islands."
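The momentum-based backtracking idea above can be sketched in a few lines. This is a toy illustration under assumptions, not the repo's actual implementation: `power_law_patience`, `MomentumBacktracker`, and the schedule constants are hypothetical names and values chosen purely to show the mechanism (shrinking patience for non-improving steps, then a forced pivot back to an earlier checkpoint).

```python
def power_law_patience(depth, base=4, alpha=1.5):
    """Hypothetical power-law schedule: the deeper the current lineage,
    the fewer non-improving generations we tolerate before pivoting."""
    return max(1, int(base / (depth + 1) ** alpha))


class MomentumBacktracker:
    """Toy sketch of momentum-based backtracking: track the best score,
    count stagnant generations, and when patience runs out, backtrack
    to an earlier checkpoint to escape the stalled branch."""

    def __init__(self):
        self.history = []           # (candidate, score) improvement checkpoints
        self.stagnant = 0           # consecutive non-improving generations
        self.best = float("-inf")

    def observe(self, candidate, score):
        if score > self.best:
            # Progress: record a checkpoint and keep evolving from here.
            self.best = score
            self.stagnant = 0
            self.history.append((candidate, score))
            return candidate
        self.stagnant += 1
        if self.stagnant >= power_law_patience(len(self.history)):
            # Stagnation detected: force a pivot by dropping the latest
            # branch and restarting from an earlier checkpoint.
            self.stagnant = 0
            if len(self.history) > 1:
                self.history.pop()
            return self.history[-1][0]
        return candidate
```

The key design point the paper's name hints at: patience is not a constant but shrinks as the lineage deepens, so long-stalled deep branches get abandoned faster than fresh ones.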
👨‍💻 This project is more than a repo; it's a demonstration of rapid research-to-production cycles using next-gen AI workflows.
Link to the paper: https://arxiv.org/abs/2601.10657
The code is live. The agents are ready. Check out the repository below.
https://github.com/hassenhamdi/OpenPACEvolve
Star the repo.
reacted to IlyasMoutawwakil's post with 🔥 about 4 hours ago
After 2 months of refinement, I'm happy to announce that a lot of Transformers' modeling code is now significantly more torch-compile & export-friendly 🔥
Why it had to be done
PyTorch's Dynamo compiler is increasingly becoming the default interoperability layer for ML systems. Anything that relies on torch.export or torch.compile, from model optimization to cross-framework integrations, benefits directly when models can be captured as a single dynamo-traced graph!
Transformers models are now easier to:
⚙️ Compile end-to-end with torch.compile backends
📦 Export reliably via torch.export and torch.onnx.export
Deploy to ONNX / ONNX Runtime, Intel's OpenVINO, NVIDIA AutoDeploy (TRT-LLM), AMD's Quark, Meta's ExecuTorch, and more hardware-specific runtimes.
This work aims at unblocking entire TorchDynamo-based toolchains that rely on exporting Transformers across runtimes and accelerators.
We are doubling down on Transformers' commitment to being a first-class citizen of the PyTorch ecosystem: more exportable, more optimizable, and easier to deploy everywhere.
There are definitely some edge cases we still haven't addressed, so don't hesitate to try compiling / exporting your favorite transformers and to open issues / PRs.
PR in the comments! More updates coming soon!
reacted to sergiopaniego's post with 🤗 about 2 months ago
ICYMI, transformers v5 is out!
Grab a coffee ☕ and go read the announcement blog: https://huggingface.co/blog/transformers-v5
Organizations
None yet