Dataset: facebook/babi_qa
How to use p208p2002/gpt2-medium-babi with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="p208p2002/gpt2-medium-babi")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("p208p2002/gpt2-medium-babi")
model = AutoModelForCausalLM.from_pretrained("p208p2002/gpt2-medium-babi")
```

How to use p208p2002/gpt2-medium-babi with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "p208p2002/gpt2-medium-babi"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "p208p2002/gpt2-medium-babi",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
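The same OpenAI-compatible endpoint can be called from Python with only the standard library. A minimal sketch, assuming the server above is running on `localhost:8000` (the helper names `build_payload` and `complete` are illustrative, not part of any API; the same code works for the SGLang server by changing the base URL and port):

```python
import json
import urllib.request

def build_payload(model: str, prompt: str,
                  max_tokens: int = 512, temperature: float = 0.5) -> dict:
    """Build the JSON body for an OpenAI-compatible /v1/completions request."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(base_url: str, payload: dict) -> dict:
    """POST the payload to the completions endpoint and return the parsed reply."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_payload("p208p2002/gpt2-medium-babi", "Once upon a time,")
# Requires the vLLM server above to be running:
# result = complete("http://localhost:8000", payload)
# print(result["choices"][0]["text"])
```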
How to use p208p2002/gpt2-medium-babi with SGLang:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "p208p2002/gpt2-medium-babi" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "p208p2002/gpt2-medium-babi",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Or run the SGLang server in Docker:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "p208p2002/gpt2-medium-babi" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "p208p2002/gpt2-medium-babi",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

How to use p208p2002/gpt2-medium-babi with Docker Model Runner:
```shell
docker model run hf.co/p208p2002/gpt2-medium-babi
```
Fine-tune and evaluate a transformer model on Facebook's bAbI tasks.

Paper: Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

Training Code: p208p2002/bAbi-tasks-with-transformer-model
| task_no | task_name | score (%) |
|---|---|---|
| qa1 | single-supporting-fact | 100 |
| qa2 | two-supporting-facts | 99.4 |
| qa3 | three-supporting-facts | 62.0 |
| qa4 | two-arg-relations | 100 |
| qa5 | three-arg-relations | 96.7 |
| qa6 | yes-no-questions | 100 |
| qa7 | counting | 100 |
| qa8 | lists-sets | 97.7 |
| qa9 | simple-negation | 100 |
| qa10 | indefinite-knowledge | 100 |
| qa11 | basic-coreference | 100 |
| qa12 | conjunction | 100 |
| qa13 | compound-coreference | 100 |
| qa14 | time-reasoning | 100 |
| qa15 | basic-deduction | 100 |
| qa16 | basic-induction | 100 |
| qa17 | positional-reasoning | 100 |
| qa18 | size-reasoning | 100 |
| qa19 | path-finding | 100 |
| qa20 | agents-motivations | 100 |
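For a quick overall figure, the per-task scores in the table above can be averaged (a small sketch; the numbers are copied from the table, and the unweighted mean across tasks is only an illustrative summary, not a metric reported by the author):

```python
# Per-task scores from the table above (qa1..qa20, in order).
scores = [
    100, 99.4, 62.0, 100, 96.7, 100, 100, 97.7, 100, 100,
    100, 100, 100, 100, 100, 100, 100, 100, 100, 100,
]

mean_score = sum(scores) / len(scores)
print(f"Mean score across all 20 bAbI tasks: {mean_score:.2f}")  # prints 97.79
```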
Please use the following input template:

```python
INPUT_TEMPLATE = """
Context:
{context}
Question:
{question}
Answer:
{answer}
"""

input_text = INPUT_TEMPLATE.format_map({
    "context": context,
    "question": question,
    "answer": answer,
}).strip()
```
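At inference time, a natural way to use this template is to leave the answer empty so the model continues after `Answer:`, then recover the answer by splitting on that marker. A minimal sketch, assuming this convention (the story, question, and `extract_answer` helper are illustrative, not from the card):

```python
INPUT_TEMPLATE = """
Context:
{context}
Question:
{question}
Answer:
{answer}
"""

# Leave the answer blank so the model generates it after "Answer:".
prompt = INPUT_TEMPLATE.format_map({
    "context": "Mary moved to the bathroom. John went to the hallway.",
    "question": "Where is Mary?",
    "answer": "",
}).strip()
assert prompt.endswith("Answer:")

def extract_answer(generated_text: str) -> str:
    """Take the text after the last 'Answer:' marker as the model's answer."""
    return generated_text.split("Answer:")[-1].strip()

# e.g. if the model continues the prompt with " bathroom":
print(extract_answer(prompt + " bathroom"))  # prints bathroom
```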