AI & ML interests

Structure-based drug discovery

Parveshiiii 
posted an update 18 days ago
Another banger from XenArcAI! 🔥

We’re thrilled to unveil three powerful new releases that push the boundaries of AI research and development:

🔗 XenArcAI/SparkEmbedding-300m

- A lightning-fast embedding model built for scale.
- Optimized for semantic search, clustering, and representation learning.

🔗 XenArcAI/CodeX-7M-Non-Thinking

- A massive dataset of 7 million code samples.
- Designed for training models on raw coding patterns without reasoning layers.

🔗 XenArcAI/CodeX-2M-Thinking

- A curated dataset of 2 million code samples.
- Focused on reasoning-driven coding tasks, enabling smarter AI coding assistants.

Together, these projects represent a leap forward in building smarter, faster, and more capable AI systems.

💡 Innovation meets dedication.
🌍 Knowledge meets responsibility.


Parveshiiii 
posted an update 25 days ago
SparkEmbedding - SoTA cross-lingual retrieval

I'm very happy to announce our latest embedding model, SparkEmbedding-300m. It's based on embeddinggemma-300m, which we fine-tuned on 1M extra examples spanning 119 languages, and the result is a model that achieves exceptional cross-lingual retrieval.

Model: XenArcAI/SparkEmbedding-300m
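
Here's a minimal sketch for trying it, assuming the repo ships a standard sentence-transformers config like its embeddinggemma-300m base (check the model card for the exact usage):

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer

# Assumption: the repo loads directly via sentence-transformers.
model = SentenceTransformer("XenArcAI/SparkEmbedding-300m")

queries = ["¿Dónde está la biblioteca?"]      # Spanish query
docs = [
    "The library is next to the town hall.",  # English candidates
    "The bakery opens at 7 am.",
]

q_emb = model.encode(queries)
d_emb = model.encode(docs)
print(model.similarity(q_emb, d_emb))  # cross-lingual relevance scores
```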
Parveshiiii 
posted an update about 2 months ago
AIRealNet - SoTA image detection model

We’re proud to release AIRealNet — a binary image classifier built to detect whether an image is AI-generated or a real human photograph. Based on SwinV2 and fine-tuned on the AI-vs-Real dataset, this model is optimized for high-accuracy classification across diverse visual domains.

If you care about synthetic media detection or want to explore the frontier of AI vs human realism, we’d love your support. Please like the model and try it out. Every download helps us improve and expand future versions.

Model page: XenArcAI/AIRealNet
Parveshiiii 
posted an update 2 months ago
Ever wanted an open‑source deep research agent? Meet Deepresearch‑Agent 🔍🤖

1. Multi‑step reasoning: Reflects between steps, fills gaps, iterates until evidence is solid.

2. Research‑augmented: Generates queries, searches, synthesizes, and cites sources.

3. Fullstack + LLM‑friendly: React/Tailwind frontend, LangGraph/FastAPI backend; works with OpenAI/Gemini.


🔗 GitHub: https://github.com/Parveshiiii/Deepresearch-Agent
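
For intuition, here's a minimal sketch of that reflect-and-search loop in plain Python; search() and llm() are hypothetical stand-ins for the repo's LangGraph nodes:

```python
def search(query: str) -> list[str]:
    # Hypothetical stand-in for the agent's web-search node.
    return [f"snippet about {query}"]

def llm(prompt: str) -> str:
    # Hypothetical stand-in for the Gemini/OpenAI call.
    return "SUFFICIENT"

def deep_research(question: str, max_steps: int = 5) -> str:
    evidence: list[str] = []
    query = question
    for _ in range(max_steps):
        evidence += search(query)  # gather sources for the current query
        verdict = llm(
            f"Question: {question}\nEvidence: {evidence}\n"
            "Reply SUFFICIENT, or propose ONE follow-up query."
        )
        if verdict.strip().upper().startswith("SUFFICIENT"):
            break                  # evidence is solid, stop iterating
        query = verdict            # reflect: fill the gap with a new query
    return llm(f"Answer '{question}', citing: {evidence}")

print(deep_research("Who introduced the transformer architecture?"))
```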
Parveshiiii 
posted an update 2 months ago
🚀 Big news from XenArcAI!

We’ve just released our new dataset: **Bhagwat‑Gita‑Infinity** 🌸📖

✨ What’s inside:
- Verse‑aligned Sanskrit, Hindi, and English
- Clean, structured, and ready for ML/AI projects
- Perfect for research, education, and open‑source exploration

🔗 Hugging Face: XenArcAI/Bhagwat-Gita-Infinity

Let’s bring timeless wisdom into modern AI together 🙌
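
A quick way to take a look (the exact column names are on the dataset card):

```python
from datasets import load_dataset

# Assumption: a default train split; see the dataset card for the schema.
ds = load_dataset("XenArcAI/Bhagwat-Gita-Infinity", split="train")
print(ds)     # column names and row count
print(ds[0])  # one verse-aligned record
```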
Parveshiiii 
posted an update 2 months ago
🚀 New Release from XenArcAI
We’re excited to introduce AIRealNet — our SwinV2‑based image classifier built to distinguish between artificial and real images.

✨ Highlights:
- Backbone: SwinV2
- Input size: 256×256
- Labels: artificial vs. real
- Performance: Accuracy 0.999 | F1 0.999 | Val Loss 0.0063

This model is now live on Hugging Face:
👉 XenArcAI/AIRealNet

We built AIRealNet to push forward open‑source tools for authenticity detection, and we can’t wait to see how the community uses it.
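
If you want to try it, here's a minimal sketch assuming the repo exposes a standard transformers image-classification config (the bundled processor should handle the 256×256 resize):

```python
# pip install transformers pillow torch
from transformers import pipeline

clf = pipeline("image-classification", model="XenArcAI/AIRealNet")
# Expect scores for the two labels, artificial vs. real.
print(clf("photo.jpg"))
```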
Tonic 
posted an update 3 months ago
COMPUTER CONTROL IS ON-DEVICE!

🏡🤖 78% of EU smart-home owners DON’T trust cloud voice assistants.

So we killed the cloud.

Meet Exté: a palm-sized Android device that sees, hears & speaks your language - 100% offline, 0% data sent anywhere.

🔓 We submitted our technologies for consideration to the Liquid AI hackathon.

📊 Dataset: 79k UI-action pairs on Hugging Face (largest Android-control corpus ever) Tonic/android-operator-episodes

⚡ Model: 98% task accuracy, 678 MB compressed, fits on existing Android devices! Tonic/l-android-control

🛤️ Experiment tracker: check out the training on our TrackioApp Tonic/l-android-control

🎮 Live model demo: upload an Android screenshot and instructions to see the model in action! Tonic/l-operator-demo



Built in a garage, funded by pre-orders, no VC. Now we’re scaling to 1k installer units.

We’re giving 50 limited-edition prototypes to investors, installers & researchers who want to co-design the sovereign smart home.

👇 Drop “EUSKERA” in the comments if you want an invite, tag a friend who still thinks Alexa is “convenient,” and smash ♥️ if AI should belong to people - not servers.
Tonic 
posted an update 3 months ago
🙋🏻‍♂️ Hey there folks ,

Just wanted to announce 🏭SmolFactory: it's the quickest and best way to fine-tune SmolLM3 and GPT-OSS-20B on Hugging Face!

Basically, it's an app you can run on Hugging Face by duplicating the Space and running your training directly on Hugging Face GPUs.

It helps you select datasets and models, fine-tune your model, spin up an experiment tracker you can check from your phone, push your model card, and even automatically build a demo on Hugging Face so you can test your model as soon as training is done!

Check out the blog to learn more: https://huggingface.co/blog/Tonic/smolfactory

Or just try the app directly:
Tonic/SmolFactory

You can vibe-check the cool models I made:
French SmolLM3: Tonic/Petite-LLM-3
Medical GPT-OSS: Tonic/med-gpt-oss-20b-demo

Check out the model cards:
multilingual reasoner (gpt-oss): Tonic/gpt-oss-20b-multilingual-reasoner
med-gpt-oss: Tonic/med-gpt-oss-20b
petite-elle-l-aime: Tonic/petite-elle-L-aime-3-sft

GitHub repo, if you like the command line more than Gradio: https://github.com/josephrp/smolfactory

Drop some likes on these links, it's much appreciated!

Feedback and PRs are welcome!
louisbrulenaudet 
posted an update 3 months ago
Supercharge Apple’s Shortcuts using Cloudflare Workers and Gemini within minutes (and for free, up to 1,500 requests per day) ☁️✨

Hello everyone! Last week, while experimenting for fun, I created an API that lets you easily access AI models (in this case, Google's) from the Shortcuts app, so you can analyze data from your apps and make the most of the generative capabilities of advanced models.

It costs me nothing, and I think it might be good to share it so that others can build on it.

In README.md, you will find everything you need to get started and put your own microservice into production, which you can call from the app’s HTTP request features.

You will simply need a free Cloudflare account and an API key obtained from Google's AI Studio.

Feel free to take a look and get back to me if you encounter any problems during deployment.

Here is the GitHub repo where you can find all the source code and run it on your own: https://github.com/louisbrulenaudet/genai-api
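
Once deployed, the Worker is just an HTTP endpoint that the Shortcuts "Get Contents of URL" action (or any client) can post to. Here's a hypothetical call shape, with a placeholder URL and payload (the real route and fields are documented in the repo's README):

```python
import requests

# Placeholder URL and payload; see the repo's README for the actual contract.
resp = requests.post(
    "https://your-worker.your-account.workers.dev/",
    json={"prompt": "Summarize this note: ..."},
    timeout=30,
)
print(resp.json())
```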
louisbrulenaudet 
posted an update 3 months ago
Although more and more code editors are aligning with the AGENTS.md file standard, some still use their own nomenclatures, which can make it difficult to maintain separate configuration files when several people work on the same project with different agents.

Bodyboard addresses this by generating canonical instructions for code helpers from a single AGENTS.md file, thereby streamlining the production of adapter outputs for Gemini CLI, Copilot, Cline, Claude, Rules, Windsurf, and OpenAI Codex integrations.

You just have to:
npm install -g bodyboard

Then run, at the root of your project:
bodyboard all

Link to npm: https://www.npmjs.com/package/bodyboard
Link to the GitHub repo: https://github.com/louisbrulenaudet/bodyboard

It's a very simple project, but it addresses certain issues I've encountered, so why not make it available to everyone...

If you have other ideas for adapters to create, feel free to open a PR on the GitHub repo.
ImranzamanML 
posted an update 4 months ago
# Runway Aleph: The Future of AI Video Editing

Runway’s new **Aleph** model lets you *transform*, *edit*, and *generate* video from existing footage using just text prompts.
You can remove objects, change environments, restyle shots, alter lighting, and even create entirely new camera angles, all in one tool.

## Key Links

- 🔬 [Introducing Aleph (Runway Research)](https://runwayml.com/research/introducing-runway-aleph)
- 📖 [Aleph Prompting Guide (Runway Help Center)](https://help.runwayml.com/hc/en-us/articles/43277392678803-Aleph-Prompting-Guide)
- 🎬 [How to Transform Videos (Runway Academy)](https://academy.runwayml.com/aleph/how-to-transform-videos)
- 📰 [Gadgets360 Coverage](https://www.gadgets360.com/ai/news/runway-aleph-ai-video-editing-generation-model-post-production-unveiled-8965180)
- 🎥 [YouTube Demo: ALEPH by Runway](https://www.youtube.com/watch?v=PPerCtyIKwA)
- 📰 [Runway Alpha dataset](Rapidata/text-2-video-human-preferences-runway-alpha)

## Prompt Tips

1. Be clear and specific (e.g., _“Change to snowy night, keep people unchanged”_).
2. Use action verbs like _add, remove, restyle, relight_.
3. Add reference images for style or lighting.


Aleph shifts AI video from *text-to-video* to *video-to-video*, making post-production faster, more creative, and more accessible than ever.
ImranzamanML 
posted an update 4 months ago
OpenAI has launched GPT-5, a significant leap forward in AI technology that is now available to all users. The new model unifies all of OpenAI's previous developments into a single, cohesive system that automatically adapts its approach based on the complexity of the user's request. This means it can prioritize speed for simple queries or engage a deeper reasoning model for more complex problems, all without the user having to manually switch settings.

Key Features and Improvements
Unified System: GPT-5 combines various models into one interface, intelligently selecting the best approach for each query.

Enhanced Coding: It's being hailed as the "strongest coding model to date," with the ability to create complex, responsive websites and applications from a single prompt.

PhD-level Reasoning: According to CEO Sam Altman, GPT-5 offers a significant jump in reasoning ability, with a much lower hallucination rate. It also performs better on academic and human-evaluated benchmarks.

New Personalities: Users can now select from four preset personalities (Cynic, Robot, Listener, and Nerd) to customize their chat experience.

Advanced Voice Mode: The voice mode has been improved to sound more natural and adapt its speech based on the context of the conversation.


https://openai.com/index/introducing-gpt-5/
https://openai.com/gpt-5/
Parveshiiii 
posted an update 4 months ago
🚀 Just Dropped: MathX-5M — Your Gateway to Math-Savvy GPTs

👨‍🔬 Wanna fine-tune your own GPT for math?
🧠 Building a reasoning agent that actually *thinks*?
📊 Benchmarking multi-step logic across domains?

Say hello to [**MathX-5M**](XenArcAI/MathX-5M) — a **5 million+ sample** dataset crafted for training and evaluating math reasoning models at scale.

Built by **XenArcAI**, it’s optimized for:
- 🔍 Step-by-step reasoning in multiple formats
- 🧮 Coverage from arithmetic to advanced algebra and geometry
- 🧰 Plug-and-play with Gemma, Qwen, Mistral, and other open LLMs
- 🧵 Compatible with Harmony, Alpaca, and OpenChat-style instruction formats

Whether you're prototyping a math tutor, testing agentic workflows, or just want your GPT to solve equations like a pro—**MathX-5M is your launchpad**.

🔗 Dive in: XenArcAI/MathX-5M
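
To take a first peek without downloading all 5M+ rows, here's a minimal streaming sketch (the split name is an assumption; the dataset card has the exact schema):

```python
from datasets import load_dataset

# Assumption: a default train split; streaming avoids the full download.
ds = load_dataset("XenArcAI/MathX-5M", split="train", streaming=True)
print(next(iter(ds)))  # inspect one sample's fields
```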

Let’s make open-source models *actually* smart at math.
#FineTuneYourGPT #MathX5M #OpenSourceAI #LLM #XenArcAI #Reasoning #Gemma #Qwen #Mistral

ImranzamanML 
posted an update 4 months ago
All the key links for OpenAI's open-sourced GPT-OSS models (117B and 21B parameters), released under Apache 2.0. Here is a quick guide to explore and build with them:

Intro & vision: https://openai.com/index/introducing-gpt-oss

Model specs & license: https://openai.com/index/gpt-oss-model-card/

Dev overview: https://cookbook.openai.com/topic/gpt-oss

How to run via vLLM: https://cookbook.openai.com/articles/gpt-oss/run-vllm

Harmony I/O format: https://github.com/openai/harmony

Reference PyTorch code: https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation

Community site: https://gpt-oss.com/

Let's deep-dive into the OpenAI models now 😊
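
As a minimal sketch of the vLLM route linked above (assuming a recent vLLM build with gpt-oss support; llm.chat applies the model's chat template, i.e. the harmony format, for you):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="openai/gpt-oss-20b")  # the 120b variant needs far more VRAM
out = llm.chat(
    [{"role": "user", "content": "Explain mixture-of-experts in two sentences."}],
    SamplingParams(max_tokens=200),
)
print(out[0].outputs[0].text)
```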

#OpenSource #AI #GPTOSS #OpenAI #LLM #Python #GenAI
ImranzamanML 
posted an update 4 months ago
Finally, OpenAI is releasing open-source models again, its first since GPT-2 in 2019.
gpt-oss-120b
gpt-oss-20b

openai/gpt-oss-120b

#AI #GPT #LLM #Openai
ImranzamanML 
posted an update 4 months ago
How transformer model layers work!

I focused on showing the core steps side by side: tokenization, embedding, and the transformer model layers, highlighting the self-attention and feedforward parts without getting lost in too much technical depth.

It shows how these layers work together to understand context and generate meaningful output!

If you are curious about the architecture behind AI language models or want a clean way to explain it, hit me up, I’d love to share!
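
To make the layer story concrete, here's a minimal PyTorch sketch of a single transformer layer: embedded tokens pass through self-attention, then a position-wise feedforward, each wrapped in a residual connection and layer norm:

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One layer: self-attention + feedforward, each with residual + norm."""
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x)  # every token attends to the others (context)
        x = self.norm1(x + attn_out)      # residual connection + layer norm
        x = self.norm2(x + self.ff(x))    # position-wise feedforward, same wrapping
        return x

emb = nn.Embedding(100, 64)             # toy vocabulary of 100 token ids
tokens = torch.randint(0, 100, (2, 5))  # 2 "tokenized" sequences of 5 ids
print(TransformerBlock()(emb(tokens)).shape)  # torch.Size([2, 5, 64])
```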



#AI #MachineLearning #NLP #Transformers #DeepLearning #DataScience #LLM #AIAgents
Parveshiiii 
posted an update 4 months ago
🚀 Launch Alert: Dev-Stack-Agents
Meet your 50-agent senior AI team — principal-level experts in engineering, AI, DevOps, security, product, and more — all bundled into one modular repo.

+ Code. Optimize. Scale. Secure.
- Full-stack execution, Claude-powered. No human bottlenecks.


🔧 Built for Claude Code
Seamlessly plug into Claude’s dev environment:

* 🧠 Each .md file = a fully defined expert persona
* ⚙️ Claude indexes them as agents with roles, skills & strategy
* 🤖 You chat → Claude auto-routes to the right agent(s)
* ✍️ Want precision? Just call @agent-name directly
* 👥 Complex task? Mention multiple agents for team execution

Examples:

"@security-auditor please review auth flow for risks"
"@cloud-architect + @devops-troubleshooter → design a resilient multi-region setup"
"@ai-engineer + @legal-advisor → build a privacy-safe RAG pipeline"


🔗 https://github.com/Parveshiiii/Dev-Stack-Agents
MIT License | Claude-Ready | PRs Welcome

ImranzamanML 
posted an update 4 months ago
Hugging Face just made life easier with the new hf CLI!
huggingface-cli is now hf.

Along with the rename come new features like hf jobs: we can now run any script or Docker image on dedicated Hugging Face infrastructure with a simple command. It's a great addition for running experiments and jobs on the fly.

To get started, just run:
pip install -U huggingface_hub

List of hf CLI Commands

Main Commands
hf auth: Manage authentication (login, logout, etc.).
hf cache: Manage the local cache directory.
hf download: Download files from the Hub.
hf jobs: Run and manage Jobs on the Hub.
hf repo: Manage repos on the Hub.
hf upload: Upload a file or a folder to the Hub.
hf version: Print information about the hf version.
hf env: Print information about the environment.
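
If you'd rather script these from Python, most commands have huggingface_hub equivalents, e.g.:

```python
from huggingface_hub import hf_hub_download, whoami

# Python equivalent of `hf download`: fetch one file from a Hub repo.
path = hf_hub_download(repo_id="openai/gpt-oss-20b", filename="config.json")
print(path)

# Python equivalent of `hf auth whoami` (needs a stored token).
print(whoami()["name"])
```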

Authentication Subcommands (hf auth)
login: Log in using a Hugging Face token.
logout: Log out of your account.
whoami: See which account you are logged in as.
switch: Switch between different stored access tokens/profiles.
list: List all stored access tokens.

Jobs Subcommands (hf jobs)
run: Run a Job on Hugging Face infrastructure.
inspect: Display detailed information on one or more Jobs.
logs: Fetch the logs of a Job.
ps: List running Jobs.
cancel: Cancel a Job.

#HuggingFace #MachineLearning #AI #DeepLearning #MLTools #MLOps #OpenSource #Python #DataScience #DevTools #LLM #hfCLI #GenerativeAI