Tags: Text Generation · Transformers · PyTorch · llama · text-generation-inference
How to use with vLLM

Install vLLM from pip and serve the model.
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "mtgv/MobileLLaMA-1.4B-Base"
```

```shell
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "mtgv/MobileLLaMA-1.4B-Base",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
```
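The same OpenAI-compatible endpoint can also be called from Python. A minimal sketch using only the standard library; the server address and sampling parameters mirror the curl example, and the `build_payload`/`complete` helper names are illustrative, not part of any library:

```python
import json
import urllib.request

SERVER = "http://localhost:8000/v1/completions"  # default vLLM server address

def build_payload(prompt, max_tokens=512, temperature=0.5):
    """Assemble the OpenAI-compatible completions request body."""
    return {
        "model": "mtgv/MobileLLaMA-1.4B-Base",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt):
    """POST the payload to the running vLLM server and return the completion text."""
    req = urllib.request.Request(
        SERVER,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]

if __name__ == "__main__":
    # Requires `vllm serve "mtgv/MobileLLaMA-1.4B-Base"` to be running locally.
    print(complete("Once upon a time,"))
```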
Use Docker

```shell
docker model run hf.co/mtgv/MobileLLaMA-1.4B-Base
```
Model Summary

MobileLLaMA-1.4B-Base is a Transformer with 1.4 billion parameters. We downscale LLaMA to facilitate off-the-shelf deployment. To make our work reproducible, all models are trained on 1.3T tokens from the RedPajama v1 dataset only. This benefits further research by enabling controlled experiments.

We extensively assess our models on two standard natural language benchmarks, covering language understanding and common-sense reasoning respectively. Experimental results show that MobileLLaMA 1.4B is on par with the most recent open-source models.

Model Sources

How to Get Started with the Model

Model weights can be loaded with Hugging Face Transformers. Examples can be found on GitHub.
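A minimal sketch of loading the weights with Transformers (assumes `transformers` and `torch` are installed; the prompt and generation settings here are illustrative):

```python
MODEL_ID = "mtgv/MobileLLaMA-1.4B-Base"

def generate(prompt, max_new_tokens=64):
    """Load MobileLLaMA-1.4B-Base and generate a continuation of `prompt`."""
    # Imported lazily so the heavy dependencies are only needed when generating.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Downloads the model weights on first run.
    print(generate("Once upon a time,"))
```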

Training Details

Please refer to Section 4.1 of our paper: MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices.
