The Llama Community Licence: What It Actually Means
The Meta Llama Community Licence Agreement governs the use, reproduction, distribution, and modification of Meta's Llama model weights. Unlike traditional enterprise software licences that charge per user, per processor, or per consumption metric, the Llama Community Licence is a royalty-free grant for both research and commercial use. For enterprises below certain thresholds, it functions as a free commercial licence with specific obligations.
The licence is not an open-source licence in the technical sense. Meta retains intellectual property rights in the model weights, and the licence restricts specific uses rather than granting unlimited rights. The distinction matters for legal and compliance teams: Llama is "open-weights" (the model parameters are publicly downloadable), but it does not satisfy the Open Source Initiative's (OSI) Open Source Definition. The open-weights characteristic means you can download, run, fine-tune, and distribute Llama-based products — but you must do so within the terms of the community licence, which cannot be waived or modified unilaterally.
Each version of Llama — Llama 2, Llama 3, Llama 3.1, Llama 3.3, Llama 4 — has its own version-specific licence agreement. The terms evolve between versions, reflecting Meta's learning from legal challenges to earlier versions and its strategic positioning of Llama within the competitive AI market. An organisation that reviewed the Llama 3 licence and approved enterprise deployment cannot assume that Llama 4 carries identical terms. Version-specific review is a mandatory governance step before each new Llama version deployment.
Key Licence Obligations for Enterprise Deployments
Three categories of obligation from the Llama Community Licence require specific attention in enterprise deployments: the commercial use threshold, attribution requirements, and the acceptable use policy.
The 700 Million Monthly Active User Threshold
The most discussed restriction in the Llama Community Licence is the provision addressing organisations whose products and services exceeded 700 million monthly active users (MAU) in the calendar month preceding the relevant Llama version's release: such organisations must request a commercial licence from Meta, which Meta may grant at its sole discretion, before using Llama. This restriction exists to prevent hyperscale consumer internet platforms — Google, Apple, Amazon, Microsoft, Tencent, and similar — from using Meta's open-weights models to compete directly with Meta's own AI products without a commercial negotiation.
For the vast majority of enterprise organisations, the 700 million MAU threshold is entirely irrelevant: no enterprise internal tool, B2B platform, or sector-specific application approaches this scale. However, certain large consumer-facing platform businesses — particularly in media, e-commerce, financial services consumer platforms, and telecommunications — should assess whether their combined user base across all digital products approaches this threshold before deploying Llama at platform level. If there is any uncertainty, a conservative legal assessment and proactive contact with Meta's licensing team is the appropriate course of action.
Attribution Requirements
Products and services built using Llama must include a clear notice that the product is "Built with Meta Llama." The specific format and placement requirements are defined in the licence agreement and vary between versions; from Llama 3 onwards, the licence also requires that the names of distributed derivative models begin with "Llama." The attribution obligation applies to commercial products distributed to third parties — internal enterprise tools used exclusively within the organisation have different (typically lighter) attribution obligations, but the licence should be reviewed for the specific version in use.
Attribution obligations are frequently overlooked by enterprise development teams who are focused on functionality rather than legal compliance. Building the attribution check into the product approval process — before code is promoted to production — is more reliable than attempting to retrofit attribution into deployed products. Legal and compliance teams should include attribution verification as a standard item in AI product deployment checklists.
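As a concrete illustration, an attribution check can be automated as a release gate. The sketch below is a minimal example: the notice string and file list are assumptions — the exact wording and placement must be taken from the licence version actually in use, and the paths are whatever artefacts your release process produces.

```python
from pathlib import Path

# Hypothetical notice text -- confirm the exact wording and placement
# required by the licence version you deploy (it varies between releases).
REQUIRED_NOTICE = "Built with Meta Llama"

def missing_attribution(paths):
    """Return the subset of release artefacts that lack the attribution notice."""
    return [
        p for p in paths
        if REQUIRED_NOTICE not in Path(p).read_text(encoding="utf-8")
    ]
```

Wired into a CI pipeline, a non-empty return value fails the build before the product reaches production, which is exactly the point at which the check is cheapest to enforce.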
Acceptable Use Policy
The Llama Community Licence incorporates an Acceptable Use Policy (AUP) that prohibits specific categories of use. These prohibitions cover applications that could harm individuals or groups — including weapons development, generation of child sexual abuse material, and other high-risk applications — as well as competitive intelligence activities designed to circumvent Meta's business interests. The AUP evolves between licence versions and should be reviewed for each new Llama deployment.
For most enterprise applications — internal knowledge management, customer service automation, document processing, code assistance, analytics — the AUP creates no practical constraint. The prohibited categories are extreme use cases that responsible enterprises would avoid regardless of licence terms. However, organisations deploying Llama in research contexts, security applications, or sectors with inherently sensitive subject matter (defence, healthcare, financial crime investigation) should conduct an explicit AUP review to confirm that their specific use cases fall within the permitted range.
Fine-Tuning and Derivative Works
One of the most commercially valuable aspects of the Llama Community Licence is the permission to fine-tune Llama on proprietary datasets and distribute the resulting fine-tuned model. This capability — unavailable for closed-weights models like GPT-4 — enables organisations to create domain-specific AI capabilities tailored to their data and use cases, without sharing proprietary data with the base model provider.
Fine-tuned Llama models and other derivative works must be distributed under the Llama Community Licence or a compatible licence — meaning recipients of your fine-tuned model receive it under the same terms Meta grants to you. This "licence inheritance" requirement means that fine-tuned Llama models cannot be distributed under proprietary licences that restrict the recipient's right to inspect, modify, or redistribute the model weights. Organisations that wish to commercialise fine-tuned Llama models as proprietary products should obtain legal advice on the compatibility of their intended distribution terms with the Llama Community Licence before distribution.
It is important to distinguish between using a fine-tuned Llama model to power a product or service (permitted — the product is your IP, powered by Llama infrastructure) and distributing the fine-tuned model weights to third parties (subject to the licence inheritance requirement). Most enterprise deployments fall in the first category and do not trigger the distribution obligations.
How Llama Licensing Compares to OpenAI
Comparing the Llama Community Licence to OpenAI's usage policies illustrates the different commercial philosophies of the two leading AI providers. OpenAI's models are closed-weights — you access them exclusively through OpenAI's API under OpenAI's terms of service, which govern permitted uses, data handling, and content restrictions. You cannot inspect the model weights, fine-tune on arbitrary data (fine-tuning is available for some models through OpenAI's pipeline but with data volume and usage restrictions), or deploy OpenAI models in your own infrastructure. OpenAI's enterprise agreements include lock-in provisions that reduce portability and reinforce vendor dependency — enterprise teams evaluating OpenAI commitments should scrutinise auto-renewal terms, minimum spend commitments, and data portability rights before signing.
Llama's open-weights architecture provides direct countermeasures to each of these constraints: you own the deployed model infrastructure, fine-tuning uses your data on your systems without intermediary data exposure, and portability is maximal — your fine-tuned weights are not locked to any specific inference provider. For enterprises with strong data sovereignty requirements, long-term AI strategies, or cost sensitivity at scale, Llama's open-weights model is structurally more favourable than closed-weights API dependency.
The trade-off is operational responsibility. OpenAI (and Azure OpenAI) manage model serving, safety infrastructure, and API reliability as a managed service. Llama self-deployment transfers those responsibilities to the deploying organisation. For enterprises with limited AI engineering capacity, the managed service model of OpenAI or Azure OpenAI may carry a lower overall operational burden — and potentially lower total cost of ownership — even if the per-token price is higher.
Practical Compliance Steps for Enterprise Llama Deployment
Translating the licence review into a deployable compliance posture requires four practical steps: legal sign-off, attribution implementation, acceptable use verification, and version governance.
Legal sign-off: The applicable Llama Community Licence Agreement for the specific version being deployed should receive formal legal review before production go-live. This review should confirm: the MAU threshold does not apply to your deployment context, the specific use case is within the acceptable use policy, attribution requirements are understood and implemented, and derivative work distribution plans (if any) comply with licence inheritance requirements. This review should be documented and retained as evidence of due diligence.
Attribution implementation: Define the specific attribution format required by the licence version and implement it in product UI, documentation, and marketing materials as applicable. Assign a named owner (typically within legal or product compliance) responsible for ensuring attribution is present in all released versions of the product.
Acceptable use verification: Document the specific use case being deployed, confirm its alignment with the AUP, and flag any edge cases for legal review before deployment. For products with broad end-user access, consider whether end-user terms of service should reflect Llama's acceptable use constraints to create an enforceable downstream obligation.
Version governance: Establish a process for reviewing licence terms when upgrading to new Llama versions. Assign responsibility for triggering this review at the point of version selection, not after deployment. Maintain a record of which Llama version is deployed in each production environment to ensure the correct licence version is being applied.
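The version-governance record can be kept as simple structured data with a gate applied at version selection time. The sketch below is a minimal example under stated assumptions: the registry contents and environment names are hypothetical, and in practice the record might live in a CMDB or infrastructure-as-code repository rather than in code.

```python
# Hypothetical deployment registry: which Llama version each production
# environment runs, and which licence versions have documented legal sign-off.
REGISTRY = {
    "prod-eu": {"llama_version": "3.1", "licence_approved": {"3", "3.1"}},
    "prod-us": {"llama_version": "3.3", "licence_approved": {"3.3"}},
}

def upgrade_requires_review(environment, target_version):
    """Return True when the target Llama version lacks a recorded legal
    sign-off for this environment -- the trigger fires at version
    selection, not after deployment."""
    return target_version not in REGISTRY[environment]["licence_approved"]
```

Because each Llama version carries its own licence agreement, the gate treats any unapproved version as requiring review — an approval recorded for Llama 3 does not pass a Llama 4 upgrade.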
Key Takeaways
Meta Llama's Community Licence is royalty-free for commercial use for the vast majority of enterprise organisations. The 700 million MAU threshold creates a restriction that is irrelevant to typical enterprise deployments but requires assessment for large consumer-facing platforms. Attribution requirements must be implemented before production deployment and maintained in all distributed versions of Llama-based products. The acceptable use policy should be reviewed against each specific deployment context — particularly for research, security, and sensitive-sector applications.
Each Llama version carries version-specific licence terms that require review before deployment. An approval for Llama 3 does not automatically extend to Llama 4. Fine-tuned models can be created and used internally without licence inheritance concerns; distribution of fine-tuned weights to third parties requires legal review of distribution term compatibility. OpenAI's enterprise agreements carry lock-in provisions that reduce portability — enterprises that want long-term AI flexibility should evaluate the structural advantages of Llama's open-weights model as part of their AI strategy.
Building an enterprise AI strategy that includes open-weights models?
We provide independent analysis of Llama, OpenAI, Azure OpenAI, and other GenAI platforms — including licence review and procurement advisory.