Git was designed for source code — small text files that compress well and diff cleanly. But modern development involves binary artifacts: compiled binaries, machine learning models, design assets, video files, and datasets. Committing these directly to Git bloats your repository and degrades performance.
Git Large File Storage (LFS) solves this by replacing large files with lightweight pointer files in your repository while storing the actual content on a separate server. The challenge: where do you host that LFS server? Public services like GitHub impose storage limits and bandwidth quotas. For teams with large binary assets, self-hosting your LFS server is the most cost-effective and privacy-preserving option.
This guide compares three leading open-source platforms with built-in Git LFS support: Gitea, Forgejo, and GitLab CE. We cover installation, storage backends, Docker Compose configurations, and migration strategies so you can choose the right LFS solution for your infrastructure.
Why Self-Host Git LFS?
Running your own LFS server instead of relying on GitHub or GitLab.com offers several advantages:
- No storage limits. GitHub’s free tier caps LFS at 1 GB. Self-hosted storage is limited only by your disk space.
- No bandwidth charges. Every `git lfs pull` and `git lfs push` consumes bandwidth. On self-hosted infrastructure, this traffic stays on your network.
- Data sovereignty. Binary assets — proprietary datasets, compiled firmware, design files — never leave your infrastructure.
- Cost predictability. GitHub charges $5/month per 50 GB of LFS storage and $5/month per 50 GB of bandwidth. Self-hosting on a single server with a 2 TB drive costs a fraction of that.
- Faster clones. When the LFS server is on the same network as your CI runners and developer workstations, large file downloads are significantly faster.
- Custom retention policies. Set your own rules for how long LFS objects are kept, when unused files are pruned, and who can access them.
For teams working with game assets, ML models, CAD files, or any binary-heavy project, self-hosted LFS pays for itself quickly.
Project Overview and Live Stats
Here’s how the three platforms compare as of April 2026, based on live GitHub data:
| Feature | Gitea | Forgejo | GitLab CE |
|---|---|---|---|
| GitHub Stars | 54,998 | N/A (Codeberg-hosted) | 24,311 |
| Last Updated | 2026-04-20 | Active (Codeberg) | 2026-04-20 |
| Language | Go | Go | Ruby |
| LFS Protocol | Native LFS API | Native LFS API | Native LFS API |
| Storage Backends | Local disk, MinIO/S3 | Local disk, MinIO/S3 | Local disk, S3, GCS |
| LFS Locking | Yes | Yes | Yes |
| LFS Object Pruning | Manual (admin API) | Manual (admin API) | Built-in admin UI |
| Docker Image | gitea/gitea | codeberg.org/forgejo/forgejo | gitlab/gitlab-ce |
| RAM (minimum) | 512 MB | 512 MB | 4 GB |
| Best For | Small teams, homelabs | Community-driven projects | Enterprise, large teams |
Gitea and Forgejo share a common codebase (Forgejo is a hard fork of Gitea created in 2022), so their LFS implementations are nearly identical. GitLab CE takes a different architectural approach with a more comprehensive — but heavier — LFS system.
Option 1: Gitea — Lightweight Git LFS Server
Gitea is the most popular lightweight self-hosted Git platform. Its LFS implementation is straightforward: configure a storage backend, enable LFS in the config, and it works.
Architecture
Gitea stores LFS objects in one of two ways:
- Local filesystem — objects stored under `[lfs].PATH` in a directory structure organized by OID
- S3-compatible storage — MinIO, AWS S3, Cloudflare R2, or any S3-compatible endpoint
Tracked file patterns are recorded in `.gitattributes`, and each tracked file is replaced in the repository by a small pointer file that references the LFS object by its SHA-256 hash (OID). When a developer pushes, Gitea receives the LFS objects, validates the OID, and stores them in the configured backend. On clone or pull, Gitea serves the objects back via its built-in LFS HTTP API.
Docker Compose Setup
This configuration deploys Gitea with PostgreSQL and MinIO as the LFS storage backend:
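The original Compose listing was lost in formatting. The sketch below reconstructs a plausible equivalent: image tags, ports, volume names, and credentials (`changeme`, `lfsadmin`) are illustrative placeholders, and only the `GITEA__lfs__*` keys echo the configuration notes that follow.

```yaml
services:
  gitea:
    image: gitea/gitea:latest
    restart: unless-stopped
    depends_on: [db, minio]
    ports:
      - "3000:3000"
      - "2222:22"
    environment:
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=changeme          # placeholder -- use a real secret
      - GITEA__server__LFS_START_SERVER=true      # enable the built-in LFS server
      - GITEA__lfs__STORAGE_TYPE=minio
      - GITEA__lfs__MINIO_ENDPOINT=minio:9000
      - GITEA__lfs__MINIO_ACCESS_KEY_ID=lfsadmin
      - GITEA__lfs__MINIO_SECRET_ACCESS_KEY=changeme
      - GITEA__lfs__MINIO_BUCKET=gitea-lfs
      - GITEA__lfs__MINIO_USE_SSL=false           # internal Docker network, no TLS
      - GITEA__lfs__SERVE_DIRECT=true             # pre-signed URLs, bypass the Gitea proxy
    volumes:
      - gitea-data:/data

  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      - POSTGRES_DB=gitea
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=changeme
    volumes:
      - db-data:/var/lib/postgresql/data

  minio:
    image: minio/minio:latest
    restart: unless-stopped
    command: server /data
    environment:
      - MINIO_ROOT_USER=lfsadmin
      - MINIO_ROOT_PASSWORD=changeme
    volumes:
      - minio-data:/data

volumes:
  gitea-data:
  db-data:
  minio-data:
```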
Key configuration notes:
- `GITEA__lfs__SERVE_DIRECT=true` tells Gitea to generate pre-signed S3 URLs so clients download LFS objects directly from MinIO, bypassing the Gitea proxy. This is essential for performance with large files.
- `GITEA__lfs__MINIO_USE_SSL=false` is correct for internal Docker networks. Set to `true` if MinIO is behind TLS termination.
- The MinIO bucket `gitea-lfs` is created automatically on first use.
Enabling LFS on a Repository
After deployment, LFS is enabled globally in Gitea’s admin settings. Individual repositories must also opt in:
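The command listing here was lost in formatting. A typical client-side opt-in looks like the following; the tracked patterns (`*.psd`, `*.onnx`) and branch name are examples only.

```shell
# Install the LFS hooks once per machine
git lfs install

# Track binary file patterns in this repository
git lfs track "*.psd" "*.onnx"

# Commit the .gitattributes file that records the tracked patterns
git add .gitattributes
git commit -m "Track design and model files with Git LFS"

# Matching files pushed from now on go to the server's LFS endpoint
git push origin main
```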
Option 2: Forgejo — Community-Driven LFS Fork
Forgejo is a community-driven hard fork of Gitea, created in response to Gitea Ltd’s commercialization decisions. Since LFS is a core Git feature rather than a commercial add-on, Forgejo’s LFS implementation closely mirrors Gitea’s — with a few enhancements.
Forgejo-Specific LFS Enhancements
- Active community governance. Forgejo’s development is steered by a community assembly, not a single company. LFS feature requests are prioritized based on community voting.
- Compatibility guarantees. Forgejo maintains API compatibility with Gitea, so existing Gitea LFS clients work without modification.
- Faster release cadence. Forgejo has maintained a consistent release schedule with security patches and feature updates.
Docker Compose Setup
Forgejo’s deployment is nearly identical to Gitea — just swap the image and adjust the domain:
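The listing was lost in formatting; a sketch of the Forgejo service is below. Forgejo publishes its images on Codeberg's registry, so the image reference differs from Gitea's; the PostgreSQL and MinIO services (omitted here) are configured the same way as for a Gitea deployment, and the credentials and bucket name are placeholders.

```yaml
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:latest   # pulled from Codeberg's registry
    restart: unless-stopped
    ports:
      - "3000:3000"
      - "2222:22"
    environment:
      - FORGEJO__server__LFS_START_SERVER=true
      - FORGEJO__lfs__STORAGE_TYPE=minio
      - FORGEJO__lfs__MINIO_ENDPOINT=minio:9000
      - FORGEJO__lfs__MINIO_ACCESS_KEY_ID=lfsadmin
      - FORGEJO__lfs__MINIO_SECRET_ACCESS_KEY=changeme   # placeholder
      - FORGEJO__lfs__MINIO_BUCKET=forgejo-lfs
      - FORGEJO__lfs__MINIO_USE_SSL=false
      - FORGEJO__lfs__SERVE_DIRECT=true
    volumes:
      - forgejo-data:/data

volumes:
  forgejo-data:
```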
The environment variable prefix changes from `GITEA__` to `FORGEJO__`, but the LFS-specific keys remain identical. This makes migration between Gitea and Forgejo straightforward.
Option 3: GitLab CE — Enterprise-Grade LFS
GitLab Community Edition offers the most feature-complete LFS implementation of the three. Its LFS system integrates with GitLab’s CI/CD, package registry, and object storage framework.
GitLab LFS Architecture
GitLab’s LFS system stores objects in configurable object storage and tracks metadata in PostgreSQL. Key features that distinguish it from Gitea/Forgejo:
- LFS object storage per-project. LFS objects can be routed to different storage backends based on project settings.
- Built-in LFS object administration. The admin UI shows LFS object counts, storage usage per-project, and provides cleanup tools.
- LFS batch API. GitLab’s LFS server supports the batch transfer API, allowing clients to request multiple objects in a single HTTP call.
- CI/CD LFS integration. GitLab CI runners automatically handle LFS objects during pipeline execution without additional configuration.
- LFS file locking. Developers can lock LFS-tracked files to prevent merge conflicts on binary assets.
Docker Compose Setup (Omnibus)
GitLab CE uses the Omnibus package, which bundles all components into a single container. The official Docker image handles LFS configuration through gitlab.rb:
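The listing was lost in formatting; the sketch below shows the LFS-relevant part of such a deployment. The domain, credentials, and bucket name are placeholders. Note that recent GitLab versions recommend the consolidated object-storage configuration; the storage-specific `lfs_object_store_*` keys shown here match the notes that follow.

```yaml
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    restart: unless-stopped
    hostname: gitlab.example.com               # placeholder domain
    ports:
      - "80:80"
      - "443:443"
      - "22:22"
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://gitlab.example.com'
        # Enable LFS and route objects to S3-compatible storage (MinIO here)
        gitlab_rails['lfs_enabled'] = true
        gitlab_rails['lfs_object_store_enabled'] = true
        gitlab_rails['lfs_object_store_proxy_download'] = false   # serve directly from S3
        gitlab_rails['lfs_object_store_remote_directory'] = 'gitlab-lfs'
        gitlab_rails['lfs_object_store_connection'] = {
          'provider' => 'AWS',
          'region' => 'us-east-1',
          'aws_access_key_id' => 'lfsadmin',          # placeholder credentials
          'aws_secret_access_key' => 'changeme',
          'endpoint' => 'http://minio:9000',
          'path_style' => true                        # required for MinIO
        }
    volumes:
      - gitlab-config:/etc/gitlab
      - gitlab-logs:/var/log/gitlab
      - gitlab-data:/var/opt/gitlab
    shm_size: "256m"

volumes:
  gitlab-config:
  gitlab-logs:
  gitlab-data:
```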
Important notes:
- `path_style: true` is required for MinIO (S3 path-style addressing vs. virtual-hosted style).
- `lfs_object_store_proxy_download: false` means LFS objects are served directly from S3. Set to `true` if GitLab should proxy downloads (useful when S3 is not publicly accessible).
- GitLab CE requires at least 4 GB RAM and benefits from 8 GB+. The Omnibus package bundles PostgreSQL, Redis, Puma, Sidekiq, and other services.
Comparison: LFS Capabilities Side by Side
| Capability | Gitea | Forgejo | GitLab CE |
|---|---|---|---|
| LFS enabled by default | Yes (config flag) | Yes (config flag) | Yes |
| S3/MinIO backend | Yes | Yes | Yes |
| Direct S3 downloads | Yes (SERVE_DIRECT) | Yes (SERVE_DIRECT) | Yes (proxy_download: false) |
| LFS file locking | Yes | Yes | Yes |
| LFS object admin UI | Basic (admin panel) | Basic (admin panel) | Full (storage analytics) |
| LFS object pruning | Admin API only | Admin API only | Admin UI + scheduled jobs |
| LFS batch API | Yes | Yes | Yes |
| LFS transfer quota | No | No | Per-group limits |
| LFS audit logging | Basic | Basic | Comprehensive |
| LFS migration tool | Manual | Manual | Built-in (import from GitHub) |
| Resource requirements | Low (512 MB) | Low (512 MB) | High (4 GB+) |
LFS Storage Backend Comparison: Local vs S3
Regardless of which platform you choose, you need to decide where LFS objects are stored:
Local Filesystem
- Pros: Simple setup, no additional infrastructure, fastest for small deployments
- Cons: No horizontal scaling, harder to back up, single point of failure
- Best for: Homelabs, single-server deployments, teams under 10 users
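The configuration snippet here was lost in formatting. For Gitea or Forgejo, local-disk LFS storage boils down to a few `app.ini` lines; the path shown is illustrative:

```ini
; app.ini -- keep LFS objects on the local filesystem
[server]
LFS_START_SERVER = true

[lfs]
STORAGE_TYPE = local
PATH = /data/git/lfs
```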
S3-Compatible Object Storage (MinIO, R2, S3)
- Pros: Horizontally scalable, built-in redundancy, easy backup/replication, works with CDN
- Cons: Additional infrastructure to manage, network latency for small files
- Best for: Production deployments, teams with large binary assets, multi-server setups
Migrating LFS Objects Between Platforms
If you’re moving from GitHub or between self-hosted platforms, here’s the migration workflow:
Step 1: Clone with LFS objects
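The commands were lost in formatting; a typical mirror clone (with a placeholder repository URL) looks like:

```shell
# Mirror-clone the source repository, including all branches and tags
git clone --mirror https://github.com/example/project.git
cd project.git

# Download every LFS object referenced anywhere in history
git lfs fetch --all origin
```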
Step 2: Push to the new server
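The commands were lost in formatting; with a placeholder URL for the new server, the push step looks like:

```shell
# Point the mirror at the new self-hosted server
git remote set-url origin https://git.example.com/team/project.git

# Upload all historical LFS objects first, then the Git refs
git lfs push --all origin
git push --mirror origin
```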
Step 3: Verify LFS integrity
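The commands were lost in formatting; one way to verify is a fresh clone from the new server (placeholder URL) plus the LFS consistency checks:

```shell
# Fresh clone from the new server; LFS files are fetched on checkout
git clone https://git.example.com/team/project.git verify-clone
cd verify-clone

# Check that pointers and local objects are consistent
git lfs fsck

# List LFS-tracked files to spot-check the migration
git lfs ls-files
```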
Performance Tuning for Large LFS Repositories
For repositories with thousands of LFS objects or multi-gigabyte files:
Use direct S3 serving. Both `SERVE_DIRECT=true` (Gitea/Forgejo) and `proxy_download: false` (GitLab) bypass the application server for downloads, dramatically improving throughput.

Configure connection pooling. PostgreSQL connection limits should be set to accommodate concurrent LFS transfers:

```ini
# Gitea/Forgejo app.ini
[database]
MAX_OPEN_CONNS = 100
```

Set appropriate timeouts. Large file uploads can take minutes. Configure your reverse proxy accordingly:

```nginx
# Nginx configuration for LFS uploads
client_max_body_size 0;  # unlimited
proxy_read_timeout 600s;
proxy_send_timeout 600s;
```

Enable LFS object caching. Place a CDN or reverse proxy cache in front of your LFS endpoint for frequently-downloaded objects.

Regular cleanup. Prune unreachable LFS objects periodically:

```shell
# In your repository
git lfs prune --verbose
```
Which Should You Choose?
Choose Gitea if: You want a lightweight, battle-tested platform with minimal resource requirements. Gitea’s LFS implementation is simple, reliable, and well-documented. It runs comfortably on a Raspberry Pi 4 and handles thousands of repositories without issue. For related CI/CD setup, see our Woodpecker CI vs Drone CI vs Gitea Actions guide which covers integrating pipelines with Gitea.
Choose Forgejo if: You want Gitea’s functionality but prefer community governance over corporate control. Forgejo’s LFS is API-compatible with Gitea, making it a drop-in replacement for existing deployments. If you’re also managing GitOps workflows, our ArgoCD vs Flux guide covers deployment strategies that complement your version control infrastructure.
Choose GitLab CE if: You need enterprise features like per-project LFS quotas, comprehensive audit logging, built-in migration tools, and tight CI/CD integration. The trade-off is significantly higher resource consumption — plan for at least 4 GB RAM and a multi-core CPU.
For teams concerned about keeping binary secrets out of version control, our secrets scanning guide covers complementary tools to ensure LFS-tracked binaries don’t accidentally contain credentials.
FAQ
What is Git LFS and why can’t I just commit large files directly to Git?
Git LFS (Large File Storage) replaces large files in your repository with lightweight text pointers. The actual file content is stored on a separate LFS server and downloaded on demand. Committing large binary files directly to Git causes your repository to grow unboundedly — every clone downloads the entire history of every binary file. LFS keeps repository clones fast while still versioning your binary assets.
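For illustration, the pointer committed in place of a binary is just three lines of text; the hash below is a made-up placeholder:

```
version https://git-lfs.github.com/spec/v1
oid sha256:aa64ef1bc8a9d1c7e2f0b3d4c5a6e7f8091a2b3c4d5e6f708192a3b4c5d6e7f8
size 104857600
```

On checkout, Git LFS swaps this pointer for the actual 100 MB object fetched from the LFS server.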
How much disk space do I need for a self-hosted LFS server?
It depends on your project. A single ML model can be 1-10 GB. Game asset repositories often exceed 50 GB. As a rule of thumb, plan for 3x your current binary asset size: 1x for the current objects, 1x for historical versions (LFS keeps old objects even after git lfs prune on the client), and 1x for growth buffer. Start with a 500 GB drive and expand as needed.
Can I use Cloudflare R2 instead of MinIO for LFS storage?
Yes. R2 is S3-compatible and works as an LFS backend for all three platforms. For Gitea and Forgejo, set `STORAGE_TYPE=minio` and point `MINIO_ENDPOINT` to your R2 endpoint URL. For GitLab CE, use the same S3 connection configuration with R2’s endpoint. R2 offers free egress bandwidth, making it cost-effective for teams with frequent LFS downloads.
How do I restrict who can push LFS objects?
All three platforms tie LFS push permissions to repository access controls. If a user can push to a repository, they can push LFS objects to it. For finer-grained control:
- Gitea/Forgejo: Use branch protection rules and team permissions to limit who can push to specific branches.
- GitLab CE: Use Protected Branches and Protected Tags settings, or configure LFS transfer quotas per-group.
- All platforms: Set up pre-receive hooks or webhooks to validate LFS uploads (e.g., reject files over a certain size or of certain MIME types).
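As a sketch of that last option, a minimal pre-receive hook might scan pushed files for the `size` line in LFS pointers and reject oversized objects. Everything here — the limit, the parsing, the lack of handling for paths with spaces — is illustrative, not production-ready:

```shell
#!/bin/sh
# Illustrative pre-receive hook: reject LFS pointers above a size limit.
LIMIT=1073741824  # 1 GiB, in bytes

while read oldrev newrev refname; do
  # For newly created refs, oldrev is all zeros; diff against the empty tree
  if [ "$oldrev" = "0000000000000000000000000000000000000000" ]; then
    oldrev=$(git hash-object -t tree /dev/null)
  fi
  for path in $(git diff --name-only --diff-filter=AM "$oldrev" "$newrev"); do
    # LFS pointer files are small and contain a "size <bytes>" line
    size=$(git cat-file -p "$newrev:$path" 2>/dev/null | awk '/^size /{print $2}')
    if [ -n "$size" ] && [ "$size" -gt "$LIMIT" ]; then
      echo "pre-receive: $path declares $size bytes, over the LFS limit" >&2
      exit 1
    fi
  done
done
exit 0
```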
Can I migrate from GitHub LFS to a self-hosted server without losing history?
Yes. The `git clone --mirror` approach followed by `git push --mirror` transfers both Git objects and LFS objects. The key step is running `git lfs fetch --all` on the cloned mirror before pushing, which downloads all historical LFS objects from GitHub. Then `git lfs push --all` followed by `git push --mirror` uploads them to your new server. Verify with `git lfs fsck` after migration.
Do I need a separate server for the LFS storage backend?
No. For small to medium deployments, running MinIO on the same server as Gitea, Forgejo, or GitLab works fine. For production environments with heavy LFS traffic, separating the object storage onto dedicated hardware (or a separate VM/container) improves performance and makes backup strategies simpler.
What happens if the LFS server goes down?
Developers can still clone repositories (the pointer files are in Git), but they won’t be able to check out the actual large files — git lfs smudge will fail. Existing clones with cached LFS objects continue to work. This is why running the LFS server on reliable hardware with proper monitoring is important.