# BachGround Text-to-Piano
This Hugging Face repository contains the public model artifacts for the BachGround text-to-piano pipeline.
The pipeline has two stages:
- a Llama 3.1 based adapter that generates base symbolic piano tokens from text
- a complementary transformer that predicts duration and velocity tokens to enrich the base sequence before MIDI rendering
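To make the enrichment stage concrete, here is a minimal sketch in pure Python. The token names (`NOTE_*`, `DUR_*`, `VEL_*`) and the one-duration-plus-one-velocity-per-note layout are illustrative assumptions, not the pipeline's actual vocabulary or interleaving scheme:

```python
# Sketch of the enrichment stage: the complementary transformer predicts
# one duration token and one velocity token per base note token, and the
# streams are interleaved before MIDI rendering.
# All token names below are hypothetical placeholders.

def enrich(base_tokens, duration_preds, velocity_preds):
    """Interleave base note tokens with predicted duration/velocity tokens."""
    enriched = []
    for note, dur, vel in zip(base_tokens, duration_preds, velocity_preds):
        enriched.extend([note, dur, vel])
    return enriched

base = ["NOTE_C4", "NOTE_E4", "NOTE_G4"]
durs = ["DUR_QUARTER", "DUR_QUARTER", "DUR_HALF"]
vels = ["VEL_64", "VEL_64", "VEL_80"]
print(enrich(base, durs, vels))
# ['NOTE_C4', 'DUR_QUARTER', 'VEL_64', 'NOTE_E4', ...]
```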
## Repository Layout
| Path | Description |
| --- | --- |
| `models/t2p_base` | Llama-based adapter files and tokenizer assets |
| `models/complementary_transformer` | Complementary transformer code and trained duration/velocity models |
| `scripts/infer_music_full_pipeline.py` | End-to-end prompt -> base tokens -> enrichment -> MIDI pipeline |
| `scripts/infer_llama_music.py` | Optional standalone base-model inference |
| `scripts/midi_to_mp3.py` | Optional MIDI-to-MP3 rendering utility |
## Usage
Use the end-to-end pipeline script for the normal inference path:
```bash
python3 scripts/infer_music_full_pipeline.py \
  --llama-model-dir models/t2p_base \
  --duration-model-dir models/complementary_transformer/models/duration \
  --velocity-model-dir models/complementary_transformer/models/velocity \
  --prompt "A calm piano melody in C major" \
  --output-dir pipeline_out \
  --do-sample \
  --temperature 0.9 \
  --top-p 0.9 \
  --max-new-tokens 450
```
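The same invocation can be driven from Python. The flag names and values below are copied directly from the command above; the `build_pipeline_command` helper is a hypothetical thin wrapper, not part of the repository:

```python
import subprocess

def build_pipeline_command(prompt, output_dir="pipeline_out"):
    """Assemble the end-to-end pipeline command with the documented flags."""
    return [
        "python3", "scripts/infer_music_full_pipeline.py",
        "--llama-model-dir", "models/t2p_base",
        "--duration-model-dir", "models/complementary_transformer/models/duration",
        "--velocity-model-dir", "models/complementary_transformer/models/velocity",
        "--prompt", prompt,
        "--output-dir", output_dir,
        "--do-sample",
        "--temperature", "0.9",
        "--top-p", "0.9",
        "--max-new-tokens", "450",
    ]

if __name__ == "__main__":
    # Requires the repository checkout and downloaded model weights on disk.
    subprocess.run(build_pipeline_command("A calm piano melody in C major"),
                   check=True)
```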
## License Breakdown
This repository carries the `llama3.1` Hugging Face license tag because it contains a Llama 3.1-derived adapter.
Artifact-level intent:
| Artifact | Licensing intent |
| --- | --- |
| `models/t2p_base/*` | Subject to the applicable Llama 3.1 license terms |
| `models/complementary_transformer/*` | Intended to be released under separate terms; see the component README before public release |
## Links
- Project GitHub page: https://github.com/BachGround/t2p
- Hugging Face repo: https://huggingface.co/umutgur/t2p
- Website: https://www.bachground.com
## Notes
- This repository is designed for inference and artifact distribution.
- Training datasets are not included.
- If you later decide to separate the weights into multiple Hugging Face repositories, keep this README as the umbrella project card and link out to the split repos.