bert-mini is a fine-tuned, quantized derivative of Google's BERT-base-uncased, optimized for real-time NLP on edge devices, IoT systems, and mobile applications. With a compact size of ~15MB and ~8M parameters, it supports a wide range of tasks, including question answering, intent classification, sentiment analysis, named entity recognition (NER), multi-class classification, open-domain classification, semantic similarity, and token classification. Ideal for privacy-first, low-latency applications, bert-mini brings BERT’s power to resource-constrained environments.
bert-mini brings BERT’s core NLP capabilities to edge AI, tailored for efficiency and versatility.
Extract precise answers from text, perfect for offline assistants in smart devices. For example, given a paragraph about a historical event, bert-mini can answer questions like “Who was involved?” or “When did it happen?”
```python
from transformers import pipeline

# Initialize QA pipeline
qa_pipeline = pipeline("question-answering", model="boltuix/bert-mini")

# Example
context = "In 1969, Neil Armstrong became the first human to walk on the moon."
question = "Who was the first human to walk on the moon?"
result = qa_pipeline(question=question, context=context)
print(result["answer"])  # Output: Neil Armstrong
```
Classify user intents for IoT or chatbots, e.g., detecting commands like “Play music.”
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model with a 3-way classification head to match the intent set
# (the head is randomly initialized until fine-tuned on intent data)
model_name = "boltuix/bert-mini"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)
model.eval()

# Example
text = "Play some music"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
probs = torch.softmax(outputs.logits, dim=1)
pred = torch.argmax(probs, dim=1).item()
labels = ["Play", "Stop", "Pause"]
print(f"Predicted intent: {labels[pred]}")  # Output: Play (after fine-tuning)
```
Detect positive/negative sentiment, useful for feedback apps.
```python
from transformers import pipeline

# Initialize sentiment pipeline
sentiment_pipeline = pipeline("sentiment-analysis", model="boltuix/bert-mini")

# Example
text = "I love this new smartwatch!"
result = sentiment_pipeline(text)
print(result)  # Output: [{'label': 'POSITIVE', 'score': 0.95}]
```
Categorize queries into one of several classes, e.g., travel intents.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model with a 4-way classification head (untrained until fine-tuned)
model_name = "boltuix/bert-mini"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=4)
model.eval()

# Example
text = "Book a flight to Paris"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
probs = torch.softmax(outputs.logits, dim=1)
pred = torch.argmax(probs, dim=1).item()
labels = ["Book", "Cancel", "Check", "Modify"]
print(f"Predicted class: {labels[pred]}")  # Output: Book (after fine-tuning)
```
Fine-tune for dynamic label sets, e.g., grouping customer support queries (see the sketch below).
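A lightweight alternative to retraining, sketched here under assumptions (mean-pooled bert-mini embeddings; the support labels are illustrative): score each query against label descriptions by cosine similarity, so the label set can change at runtime.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

model_name = "boltuix/bert-mini"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def embed(texts):
    # Rough sentence embeddings: mean-pool the last hidden state
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1)

# Illustrative, runtime-editable label set
labels = ["billing question", "technical issue", "account cancellation"]
query = "My app keeps crashing on startup"

scores = F.cosine_similarity(embed([query]), embed(labels))
print(labels[scores.argmax().item()])  # e.g., technical issue
```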
Identify entities like names or locations.
```python
from transformers import pipeline

# Initialize NER pipeline
ner_pipeline = pipeline("ner", model="boltuix/bert-mini")

# Example
text = "Elon Musk visited Paris"
result = ner_pipeline(text)
print(result)  # Output (simplified): [{'entity': 'PERSON', 'word': 'Elon Musk'}, {'entity': 'LOCATION', 'word': 'Paris'}]
```
Measure text similarity for clustering or search on edge devices.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Load model
model_name = "boltuix/bert-mini"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

# Example texts
text1 = "I want to book a flight"
text2 = "Reserve a plane ticket"
inputs1 = tokenizer(text1, return_tensors="pt", padding=True, truncation=True)
inputs2 = tokenizer(text2, return_tensors="pt", padding=True, truncation=True)

# Get mean-pooled sentence embeddings
with torch.no_grad():
    outputs1 = model(**inputs1).last_hidden_state.mean(dim=1)
    outputs2 = model(**inputs2).last_hidden_state.mean(dim=1)
similarity = F.cosine_similarity(outputs1, outputs2).item()
print(f"Similarity score: {similarity:.4f}")  # Output: e.g., 0.8923
```
Beyond NER, classify tokens for tasks like part-of-speech tagging.
```python
from transformers import pipeline

# Initialize token classification pipeline
token_pipeline = pipeline("token-classification", model="boltuix/bert-mini")

# Example
text = "The quick brown fox jumps"
result = token_pipeline(text)
print(result)  # Output (simplified): [{'entity': 'DET', 'word': 'The'}, {'entity': 'ADJ', 'word': 'quick'}, ...]
```
Predict missing words in sentences.
```python
from transformers import pipeline

# Initialize fill-mask pipeline
mlm_pipeline = pipeline("fill-mask", model="boltuix/bert-mini")

# Test example
result = mlm_pipeline("The train arrived at the [MASK] on time.")
print(result[0]["sequence"])  # Output: The train arrived at the station on time.
```
```bash
pip install transformers torch datasets
```
Requires Python 3.6+ and ~15MB storage.
Fine-tune for tasks like QA or multi-class classification:
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification, Trainer, TrainingArguments
from datasets import Dataset
import pandas as pd

# Prepare dataset
data = {
    "text": ["Book a flight", "Cancel my ticket", "Check flight status", "Modify booking"],
    "label": [0, 1, 2, 3]  # Multi-class labels
}
df = pd.DataFrame(data)
dataset = Dataset.from_pandas(df)

# Load tokenizer and model
model_name = "boltuix/bert-mini"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=4)

# Tokenize dataset (return plain lists; the Trainer's collator converts them to tensors)
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True, max_length=64)

tokenized_dataset = dataset.map(tokenize_function, batched=True)

# Training arguments
training_args = TrainingArguments(
    output_dir="./bert_mini_results",
    num_train_epochs=5,
    per_device_train_batch_size=2,
    logging_dir="./bert_mini_logs",
    logging_steps=10,
    save_steps=100,
    eval_strategy="no",
    learning_rate=3e-5,
)

# Initialize Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset,
)

# Fine-tune
trainer.train()

# Save model
model.save_pretrained("./fine_tuned_bert_mini")
tokenizer.save_pretrained("./fine_tuned_bert_mini")
```
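Once saved, the fine-tuned checkpoint can be reloaded for inference. A brief usage sketch (the label list mirrors the toy training data above):

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

# Load the fine-tuned checkpoint saved above
tokenizer = BertTokenizer.from_pretrained("./fine_tuned_bert_mini")
model = BertForSequenceClassification.from_pretrained("./fine_tuned_bert_mini")
model.eval()

labels = ["Book", "Cancel", "Check", "Modify"]
inputs = tokenizer("Check my flight status", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(dim=1).item()])  # e.g., Check
```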
Deploy using ONNX or TensorFlow Lite for edge devices.
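For the ONNX path, a minimal export sketch using torch.onnx.export (the checkpoint path assumes the fine-tuning step above; the file name, opset version, and axis names are illustrative choices):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_path = "./fine_tuned_bert_mini"  # assumed path from the fine-tuning example
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
model.eval()

# Dummy input used to trace the graph
dummy = tokenizer("Book a flight", return_tensors="pt", padding="max_length", max_length=64)

torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "bert_mini.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch"},
        "attention_mask": {0: "batch"},
        "logits": {0: "batch"},
    },
    opset_version=14,
)
```

The exported graph can then be run with onnxruntime on the target device.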
bert-mini achieved a 5/5 pass rate in MLM tests.
| Sentence | Expected Word |
|---|---|
| She wore a beautiful [MASK] to the party. | dress |
| Mount Everest is the [MASK] mountain in the world. | highest |
| The [MASK] barked loudly at the stranger. | dog |
| He used a [MASK] to hammer the nail. | hammer |
| The train arrived at the [MASK] on time. | station |
| Model | Parameters | Size | Edge/IoT Focus | Tasks |
|---|---|---|---|---|
| bert-mini | ~8M | ~15MB | High | MLM, QA, NER, Classification, Similarity |
| NeuroBERT-Mini | ~10M | ~35MB | High | MLM, NER, Classification |
| DistilBERT | ~66M | ~200MB | Moderate | MLM, QA, NER, Classification |
| TinyBERT | ~14M | ~50MB | Moderate | MLM, Classification |
MIT License: Free to use. See LICENSE.
bert-mini brings BERT-style NLP to the edge, enabling real-time, offline AI for IoT, mobile, and smart devices. From QA to semantic similarity, it’s your go-to model for 2025. Explore it on Hugging Face!