Fine-tuning of LLMs and the economics involved


Most of us are using LLMs, and some of us are getting to the point where LLMs are going to production. The honeymoon phase will soon be over, and practical realities like cost and maintainability will become mainstream concerns. However, the cost of running LLMs is not well understood and is often not put in perspective. This talk will give the audience practical insights into running LLMs for their use cases. Are you aware that the same task can cost $3,500 with one model or $1,260,000 with another? This spread does come with a difference in performance, but it leaves a lot of room in the middle for thinking about the trade-off between cost and performance. You'll get to understand the different cost dimensions of LLMs: for commercial LLM APIs, the cost of input and output tokens; for open-source LLMs, the cost of provisioning the machines needed to run or host the model, and the cost considerations around spot instances. By understanding the economics of different models, businesses and LLM users can make informed decisions tailored to their needs. Comparing costs across models such as GPT-4, Anthropic Claude V1, InstructGPT-DaVinci, Curie, and self-hosted 7B models reveals significant variations in pricing.
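The kind of back-of-the-envelope comparison the talk describes can be sketched in a few lines. Note that all per-token prices below are illustrative assumptions for the purpose of the example, not current vendor pricing:

```python
# Back-of-the-envelope cost comparison for running the same workload
# on different LLM APIs. All prices are assumed placeholder values,
# NOT real vendor pricing.

PRICES_PER_1K_TOKENS = {            # (input $, output $) per 1K tokens -- assumed
    "gpt-4":            (0.03, 0.06),
    "claude-v1":        (0.008, 0.024),
    "instruct-davinci": (0.02, 0.02),
    "curie":            (0.002, 0.002),
}

def workload_cost(model, input_tokens, output_tokens):
    """Total API cost in dollars for a given token volume."""
    in_price, out_price = PRICES_PER_1K_TOKENS[model]
    return (input_tokens / 1000) * in_price + (output_tokens / 1000) * out_price

# Example: the same task consuming 10M input and 2M output tokens.
for model in PRICES_PER_1K_TOKENS:
    print(f"{model}: ${workload_cost(model, 10_000_000, 2_000_000):,.2f}")
```

Even with these placeholder numbers, the cheapest and most expensive options differ by more than an order of magnitude for an identical token volume, which is the trade-off space the talk explores.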


Grab your ticket for a unique experience of inspiration, meeting and networking for the AI & data science industry

Book your tickets early. We have a hard stop at 1,200 passes.

Note: Ticket pricing is subject to change at any time.

  • Early Bird Passes

    Available till 4th Aug 2023
  • All access, 3-day passes
  • Conference lunch on all 3 days
  • Group discount available
  • Late Passes

    Available from 23rd Sep 2023
  • All access, 3-day passes
  • No group discount available
  • 25000
