Git LFS (Large File Storage) replaces large files in your repositories with lightweight pointer files, while storing the actual content on Mesa’s servers. This is ideal for binary assets, datasets, media files, and other large files that don’t diff well.
When to use Git LFS
Use Git LFS for files that are:
- Large (typically > 1MB): Images, videos, audio, compiled binaries
- Binary: Files that don’t diff meaningfully (PDFs, PSDs, ZIPs)
- Datasets: ML training data, database dumps, fixtures
LFS keeps your repository fast to clone while still versioning large files alongside your code.
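To see which files in an existing project would benefit, you can scan the working tree for anything above the rough 1 MB threshold; a minimal sketch using standard Unix tools (adjust the size cutoff to taste):
# List working-tree files larger than 1 MB, skipping Git's own object store
find . -path ./.git -prune -o -type f -size +1M -print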
Setup
Install Git LFS
# macOS (Homebrew)
brew install git-lfs
git lfs install
# Debian/Ubuntu
sudo apt install git-lfs
git lfs install
# Or download from https://git-lfs.com
# Windows (Chocolatey)
choco install git-lfs
git lfs install
# Or download from https://git-lfs.com
Track files
Tell Git LFS which files to manage using git lfs track:
# Track all PNG files
git lfs track "*.png"
# Track all files in a directory
git lfs track "assets/**"
# Track specific large files
git lfs track "model.bin"
git lfs track "dataset.parquet"
This creates or updates a .gitattributes file:
*.png filter=lfs diff=lfs merge=lfs -text
assets/** filter=lfs diff=lfs merge=lfs -text
model.bin filter=lfs diff=lfs merge=lfs -text
Commit the .gitattributes file to your repository so all collaborators use the same LFS configuration.
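For example, committing the tracking configuration looks like this (use whatever commit message fits your conventions):
# Commit the LFS configuration alongside your code
git add .gitattributes
git commit -m "Track large binary files with Git LFS"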
Push and pull
LFS works transparently with standard Git commands:
# Add and commit as usual
git add model.bin
git commit -m "Add trained model"
# Push to Mesa - LFS files are uploaded automatically
git push origin main
When cloning or pulling, LFS files are downloaded automatically:
# Clone with LFS files
git clone https://t:${MESA_API_KEY}@depot.mesa.dev/acme/ml-models.git
# Pull latest, including LFS objects
git pull origin main
Verify LFS files
# List LFS-tracked files in your repo
git lfs ls-files
# Check which patterns are tracked
git lfs track
File locking
For binary files that can’t be merged, use LFS locks to prevent conflicts:
# Lock a file before editing
git lfs lock assets/hero-image.psd
# See all locks in the repo
git lfs locks
# Unlock when done
git lfs unlock assets/hero-image.psd
Locks are advisory - they inform collaborators that someone is editing a file. Mesa does not prevent pushing changes to locked files you don’t own, but the lock status is visible to all users.
Force unlock
If you need to take over a lock (e.g., the original locker is unavailable):
git lfs unlock assets/hero-image.psd --force
This requires git:write scope on your API key.
Authentication
LFS uses the same credentials as Git operations. Your Mesa API key works automatically:
# API key embedded in URL
git clone https://t:YOUR_API_KEY@depot.mesa.dev/acme/my-repo.git
# Or using environment variable
git clone https://t:${MESA_API_KEY}@depot.mesa.dev/acme/my-repo.git
Required scopes
| Operation | Scope |
|---|---|
| Download LFS objects | git:read |
| Upload LFS objects | git:write |
| List locks | git:read |
| Create/delete locks | git:write |
Storage and quotas
LFS objects count toward your organization’s storage quota. The total repository size includes both Git objects and LFS objects:
Total Size = Git Objects + LFS Objects
Uploads that would exceed your quota are rejected. Check your organization’s storage usage in the dashboard before uploading large files.
Object limits
- Maximum file size: 5 GB per object
- Upload method: Direct to S3 with pre-signed URLs (efficient for large files)
Migrating existing files to LFS
To move existing large files to LFS:
# Rewrite history so matching files are stored in LFS (migrate is built into Git LFS)
git lfs migrate import --include="*.psd,*.bin" --everything
# --everything rewrites history on all branches - a force push is required
git push origin --force --all
History rewriting affects all collaborators. Coordinate with your team before migrating.
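Once the rewritten history is pushed, existing clones still point at the old commits. A common recovery sketch for collaborators (assuming they have no unpushed local work they need to keep) is to reset onto the rewritten branch, or simply re-clone:
# Discard the old local history and adopt the rewritten branch
git fetch origin
git checkout main
git reset --hard origin/main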
REST API uploads (no clone required)
For automation, CI/CD, and agent workflows, you can upload large files directly via the REST API without cloning the repository. This is ideal for:
- AI agents deploying model weights or datasets
- Build pipelines uploading artifacts
- Automation scripts that don’t need a full checkout
Two-step flow
- Upload: Request a pre-signed URL and upload content to S3
- Commit: Create a commit with LFS file references
Client Mesa API S3
│ │ │
│ POST /lfs/objects │ │
│ { oid, size } │ │
│ ──────────────────────────────────>│ │
│ │ │
│ { upload_url, expires_in } │ │
│ <──────────────────────────────────│ │
│ │ │
│ PUT upload_url │ │
│ ───────────────────────────────────────────────────────────>│
│ │ │
│ POST /commits │ │
│ { files: [{ path, lfs: { oid, size } }] } │
│ ──────────────────────────────────>│ │
│ │ │
│ { sha: "abc123..." } │ │
│ <──────────────────────────────────│ │
Using the SDK helper
The TypeScript SDK provides an uploadLargeFiles helper that handles the entire flow:
import { Mesa } from "@mesadev/sdk";
import { uploadLargeFiles } from "@mesadev/sdk/helpers";
const mesa = new Mesa({ apiKey: process.env.MESA_API_KEY });
// Read file content
const modelData = await Bun.file("model.bin").arrayBuffer();
// Upload and commit in one call
const commit = await uploadLargeFiles(mesa, {
org: "my-org",
repo: "ml-models",
branch: "main",
message: "Deploy trained model v2.3",
author: {
name: "Deploy Bot",
email: "[email protected]",
},
files: [
{
path: "models/classifier.bin",
content: new Uint8Array(modelData),
},
],
});
console.log(`Committed: ${commit.sha}`);
The helper automatically:
- Computes SHA-256 hashes for each file
- Requests upload URLs from the LFS endpoint
- Uploads content directly to S3
- Creates a commit with LFS pointer files
Multiple files
You can upload multiple large files in a single commit:
const commit = await uploadLargeFiles(mesa, {
org: "my-org",
repo: "datasets",
branch: "main",
message: "Add training datasets",
author: {
name: "Data Pipeline",
email: "[email protected]",
},
files: [
{ path: "data/train.parquet", content: trainData },
{ path: "data/test.parquet", content: testData },
{ path: "data/validation.parquet", content: valData },
],
});
Manual API calls
If you need lower-level control, you can call the API directly:
import { createHash } from "crypto";
// 1. Compute SHA-256 hash of your content
const content = await Bun.file("large-file.bin").arrayBuffer();
const hash = createHash("sha256")
.update(new Uint8Array(content))
.digest("hex");
// 2. Request upload URL
const { result } = await mesa.lfs.upload({
org: "my-org",
repo: "my-repo",
body: {
objects: [{ oid: hash, size: content.byteLength }],
},
});
// 3. Upload to S3
const obj = result.objects[0];
if (obj.upload_url) {
await fetch(obj.upload_url, {
method: "PUT",
body: content,
headers: { "Content-Type": "application/octet-stream" },
});
}
// 4. Create commit with LFS reference
const commit = await mesa.commits.create({
org: "my-org",
repo: "my-repo",
body: {
branch: "main",
message: "Add large file",
author: { name: "Bot", email: "[email protected]" },
files: [
{
path: "data/large-file.bin",
lfs: { oid: hash, size: content.byteLength },
},
],
},
});
Rust SDK
The Rust SDK provides the same functionality:
use mesa_dev::{Mesa, MesaError};
use mesa_dev::helpers::{upload_large_files, LargeFile, UploadLargeFilesOptions};
use mesa_dev::models::Author;
#[tokio::main]
async fn main() -> Result<(), MesaError> {
let client = Mesa::new("my-api-key");
let content = std::fs::read("model.bin")?;
let commit = upload_large_files(
&client,
UploadLargeFilesOptions {
org: "my-org".into(),
repo: "ml-models".into(),
branch: "main".into(),
message: "Deploy model".into(),
author: Author {
name: "Deploy Bot".into(),
email: "[email protected]".into(),
date: None,
},
files: vec![LargeFile {
path: "models/classifier.bin".into(),
content,
}],
base_sha: None,
},
).await?;
println!("Committed: {}", commit.sha);
Ok(())
}
API reference
| Endpoint | Description |
|---|---|
| POST /api/v1/{org}/{repo}/lfs/objects | Request pre-signed upload URLs |
| POST /api/v1/{org}/{repo}/lfs/objects/download | Request pre-signed download URLs |
Request body:
{
"objects": [
{ "oid": "sha256-hash-64-hex-chars", "size": 12345678 }
]
}
Response:
{
"objects": [
{
"oid": "abc123...",
"size": 12345678,
"upload_url": "https://s3.amazonaws.com/...",
"expires_in": 3600,
"exists": false
}
]
}
If the object already exists, exists will be true and no upload_url will be provided—you can skip the upload and proceed directly to the commit.
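As a rough illustration of the raw request, the sketch below computes the oid and size with standard tools and posts the documented body with curl. The base URL and the Bearer-style Authorization header are assumptions here (this page only documents Git-style credentials); the SDK helpers above handle authentication for you.
# Compute the SHA-256 oid and byte size of the file
oid=$(shasum -a 256 model.bin | awk '{print $1}')
size=$(wc -c < model.bin | tr -d ' ')
# Request a pre-signed upload URL (host and auth header are assumptions)
curl -X POST "https://depot.mesa.dev/api/v1/acme/ml-models/lfs/objects" \
  -H "Authorization: Bearer ${MESA_API_KEY}" \
  -H "Content-Type: application/json" \
  -d "{\"objects\":[{\"oid\":\"${oid}\",\"size\":${size}}]}"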
CI/CD usage
LFS works in CI pipelines with the same authentication:
name: Build with LFS assets
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Install Git LFS
        run: git lfs install
      - name: Checkout with LFS
        run: |
          git clone https://t:${{ secrets.MESA_API_KEY }}@depot.mesa.dev/acme/my-repo.git
          cd my-repo
          git lfs pull
Use git lfs pull to explicitly download LFS objects after cloning. Some CI environments defer LFS downloads by default.
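If a job only needs part of the LFS content, one option is to skip the automatic download during clone and then pull only the paths the job uses; a sketch (the assets/** pattern is just an example):
# Skip the automatic LFS download during clone, then fetch selected paths
GIT_LFS_SKIP_SMUDGE=1 git clone https://t:${MESA_API_KEY}@depot.mesa.dev/acme/my-repo.git
cd my-repo
git lfs pull --include="assets/**"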
Troubleshooting
LFS objects not downloading
If you see pointer files instead of actual content:
# Fetch all LFS objects for current branch
git lfs pull
# Fetch LFS objects for all branches
git lfs fetch --all
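To confirm whether a given path is still a pointer, inspect its first bytes; an LFS pointer is a short text stub in roughly the form shown in the comments below (the hash and size are placeholders):
# A pointer file looks like:
#   version https://git-lfs.github.com/spec/v1
#   oid sha256:<64-hex-character hash>
#   size <byte count>
head -c 200 assets/hero-image.psd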
Authentication errors
Ensure your API key has the required scope:
- git:read for downloads
- git:write for uploads and locks
# Test your credentials
git lfs env
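If git lfs env shows the expected endpoint but transfers still prompt for or reject credentials, one option is to store the key with Git's credential helper so LFS HTTP requests reuse it. A sketch (this writes the key in plaintext to ~/.git-credentials; the t username matches the URL form shown above):
# Store the Mesa credential so LFS transfers reuse it
git config --global credential.helper store
git credential approve <<EOF
protocol=https
host=depot.mesa.dev
username=t
password=${MESA_API_KEY}
EOF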
Quota exceeded
If uploads fail with a quota error:
- Check your organization’s storage usage in the dashboard
- Remove unused LFS objects or request a quota increase
- Consider using .lfsconfig to exclude large files from certain branches