---
language: en
license: mit
library_name: transformers
tags:
- ai-detection
- text-classification
- onnx
- education
---
# AI Detector PGX

BERT-based classifier for detecting AI-generated text in student essays. Trained on PG assignments.

## Quick Start

### Python
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "darwinkernelpanic/ai-detector-pgx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The mitochondria is the powerhouse of the cell..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

probs = torch.softmax(outputs.logits, dim=1)
ai_prob = probs[0][1].item()
print(f"AI Probability: {ai_prob:.2%}")
```
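The `AI Probability` printed above is just a softmax over the model's two logits. A minimal standalone sketch of that step (no model download needed; the logit values below are made up for illustration):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for [human, ai]; index 1 is the "ai" class
logits = [-1.2, 2.3]
probs = softmax(logits)
print(f"AI Probability: {probs[1]:.2%}")  # → AI Probability: 97.07%
```

`torch.softmax(outputs.logits, dim=1)` computes the same quantity over the batch dimension.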
### JavaScript (ONNX)

```javascript
import * as ort from 'onnxruntime-web';

const session = await ort.InferenceSession.create('model.onnx');
// Tokenize with @xenova/transformers to obtain input_ids and attention_mask,
// then run inference
const results = await session.run({ input_ids, attention_mask });
const logits = results.logits.data;
// Softmax over the two logits; subtracting the max keeps exp() from overflowing
const m = Math.max(logits[0], logits[1]);
const aiProb = Math.exp(logits[1] - m) / (Math.exp(logits[0] - m) + Math.exp(logits[1] - m));
```
## Model Details

- **Base:** prajjwal1/bert-tiny (4.4M params)
- **Classes:** human (0), ai (1)
- **Sequence length:** 512 tokens
- **ONNX size:** 255MB
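Inputs longer than the 512-token limit are silently truncated by `truncation=True` in the example above. One common workaround is to split long documents into overlapping windows and score each window separately; a hedged sketch (the `stride` overlap and the helper itself are illustrative, not part of this model's API):

```python
def chunk_tokens(tokens, max_len=512, stride=128):
    # Split a long token sequence into overlapping windows so each
    # chunk fits within the model's 512-token limit
    step = max_len - stride
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
    return chunks

# 1200 placeholder tokens -> three overlapping windows
lengths = [len(c) for c in chunk_tokens([0] * 1200)]
print(lengths)  # → [512, 512, 432]
```

Per-chunk AI probabilities can then be averaged or max-pooled into a single document-level score.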
## Limitations

Trained on academic essays; it may not generalize well to other text types or domains.