# Vensa 1.1B (GGUF)
Vensa 1.1B is a lightweight AI model fine-tuned for Flutter development and Dart programming.

This GGUF build is optimized for high-performance local inference on desktop machines and edge devices via llama.cpp. It balances a small footprint with deep knowledge of mobile app architecture and widget implementation.
## Developer Information
- Developer: Praveen Kumar
- Portfolio: praveen-dev.space
- Contact: praveenvenkat042k@gmail.com
- Expertise: Flutter Development, AI Engineering, and Server Configurations
## Model Details
- Architecture: Llama-based (1.1B Parameters)
- Format: GGUF (Optimized for CPU & Mobile)
- Quantization: Q4_K_M (High efficiency with minimal quality loss)
- Primary Focus: Dart syntax, Flutter widget trees, state management (Provider, Riverpod, Bloc), and mobile UI/UX patterns
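The 2048-token context window used in the example below has to cover both the prompt and the generated response, so it helps to budget them explicitly. A minimal sketch of that arithmetic (the helper name is ours; the 512-token response budget mirrors `max_tokens` in the usage example):

```python
# Minimal sketch: splitting a fixed context window between prompt and response.
# The window size and response budget here match the usage example in this card;
# the helper itself is illustrative, not part of llama-cpp-python.

def prompt_token_budget(n_ctx: int, max_new_tokens: int) -> int:
    """Return how many tokens remain for the prompt once the
    response budget is reserved out of the context window."""
    if max_new_tokens >= n_ctx:
        raise ValueError("response budget must be smaller than the context window")
    return n_ctx - max_new_tokens

# With n_ctx=2048 and max_tokens=512, the prompt may use at most 1536 tokens.
print(prompt_token_budget(2048, 512))  # → 1536
```

If a prompt exceeds this budget, llama.cpp will truncate or refuse it, so long Flutter code snippets should be trimmed before inclusion.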
## API Implementation
While the model weights are openly available in GGUF format, the Vensa API is currently a private service used for specialized integrations.
If you are interested in collaboration or custom API access, please contact the developer directly.
## How to Use (Python / llama-cpp-python)
To run this model locally, first install the dependencies:

```shell
pip install llama-cpp-python huggingface-hub
```
Then run the following Python script:

```python
from llama_cpp import Llama
from huggingface_hub import hf_hub_download

# 1. Download the GGUF model
model_path = hf_hub_download(
    repo_id="prawinn04/vensa-1.1b-gguf",
    filename="vensa-1.1b.gguf"
)

# 2. Load the model
llm = Llama(
    model_path=model_path,
    n_ctx=2048,
    n_threads=4  # Adjust based on your CPU cores
)

# 3. Run a Flutter-related prompt
output = llm(
    "### Instruction:\nExplain how to use a ListView.builder in Flutter.\n\n### Response:",
    max_tokens=512,
    stop=["### Instruction:"]
)

print(output["choices"][0]["text"])
```
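The Alpaca-style `### Instruction:` / `### Response:` template used in the script above can be factored into a small helper so every call formats prompts identically. The function name below is ours, not part of the model or llama-cpp-python:

```python
# Hypothetical helper wrapping the instruction template shown in this card.

def build_prompt(instruction: str) -> str:
    """Format a user instruction in the Alpaca-style template
    used by the usage example above."""
    return f"### Instruction:\n{instruction}\n\n### Response:"

prompt = build_prompt("Explain how to use a ListView.builder in Flutter.")
# Pass `prompt` to the loaded model exactly as in the script above:
# output = llm(prompt, max_tokens=512, stop=["### Instruction:"])
```

Keeping the template in one place avoids subtle formatting drift (a missing newline or header) that can noticeably degrade instruction-tuned model output.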
## License
This model is released under the MIT License.