Using Qwen2.5-7B-Instruct powered code agents to create a local, open source, multi-agentic RAG system
Large Language Models have shown impressive capabilities, and they continue to improve steadily with each new generation of models released. Applications such as chatbots and summarisation can directly exploit the language proficiency of LLMs as…
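To make the retrieval side of such a system concrete, here is a minimal sketch of the core of a local RAG pipeline: embed a small corpus, then rank passages against a query by cosine similarity. This assumes the sentence-transformers package; the embedding model name, the corpus, and the helper function are illustrative placeholders, not taken from the article.

```python
# Minimal sketch of local RAG retrieval (illustrative, not the article's code).
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Qwen2.5-7B-Instruct is an open-weight instruction-tuned LLM.",
    "RAG augments generation with documents retrieved at query time.",
    "Multi-agent systems split a task across cooperating LLM agents.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve("What does retrieval-augmented generation do?"))
```

The retrieved passages would then be handed to the generating agent as context; the multi-agent orchestration itself is what the full article walks through.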
Discover how to set up an efficient MLflow environment to track your experiments, compare runs, and choose the best model for deployment
Training and fine-tuning various models is a routine task for every computer vision researcher. Even for simple models, we run a hyper-parameter search to find the optimal way of training the model over our…
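As a taste of the workflow the article covers, here is a minimal sketch of MLflow tracking around a toy hyper-parameter search. The experiment name, parameters, and metric values are illustrative stand-ins, not from the article.

```python
# Minimal MLflow tracking sketch (illustrative values, no real training).
import mlflow

mlflow.set_experiment("cv-model-comparison")  # assumed experiment name

for lr in (1e-3, 1e-4):  # toy hyper-parameter search
    with mlflow.start_run():
        mlflow.log_param("learning_rate", lr)
        mlflow.log_param("backbone", "resnet18")
        # ... train and validate the model here ...
        val_acc = 0.90 if lr == 1e-3 else 0.87  # stand-in for a real metric
        mlflow.log_metric("val_accuracy", val_acc)
```

Launching `mlflow ui` then lets you compare the logged runs side by side and pick the strongest candidate for deployment.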
My experience using VSCode (GitHub Copilot) and Cursor (Claude 3.5 Sonnet) as a Data Scientist
As developers, we’re constantly searching for tools to enhance our productivity and make coding more enjoyable. I have been using Visual Studio Code (VSCode) for over six years; it…
Let’s dive into the most important libraries in R and Python for visualising data and creating different charts, and weigh their pros and cons
Being a pro in certain programming languages is the goal of every aspiring data professional. Reaching a certain level in one of the countless languages is a critical milestone for…
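On the Python side, matplotlib is the usual baseline among the libraries being compared; a minimal chart looks like the sketch below. The data is made up purely for illustration.

```python
# A basic matplotlib line chart with toy data (illustrative only).
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
sales = [120, 135, 128, 150]

fig, ax = plt.subplots()
ax.plot(months, sales, marker="o")
ax.set_xlabel("Month")
ax.set_ylabel("Sales")
ax.set_title("Monthly sales (toy data)")
plt.show()
```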
If you have been a data scientist for a while, sooner or later you’ll notice that you have shifted from a VSCode-loving, research paper-reading, git-version-committing data scientist to a collaboration-driving, project-scoping, stakeholder-managing, and strategy-setting individual. This shift will be gradual and almost unnoticeable, but one that will require you to put on different hats…
Implementing Speculative and Contrastive Decoding
Large Language Models comprise billions of parameters (weights). For each token it generates, the model has to perform computationally expensive calculations across all of these parameters. Large Language Models accept a sequence of tokens and generate a probability distribution over the next most likely token. Thus,…
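As a flavour of the technique, here is a toy, self-contained sketch of a single speculative-decoding step: a cheap draft distribution proposes a token, and the expensive target distribution accepts it with probability min(1, p/q), otherwise resampling from the residual distribution. The two distributions here are random stand-ins, not real models, and everything is conditioned on a dummy context.

```python
# Toy single-step speculative sampling (stand-in distributions, not real models).
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8

def draft_dist(ctx):   # stand-in for a small, fast draft model
    p = rng.random(VOCAB)
    return p / p.sum()

def target_dist(ctx):  # stand-in for the large, slow target model
    p = rng.random(VOCAB)
    return p / p.sum()

def speculative_step(ctx):
    q = draft_dist(ctx)
    x = rng.choice(VOCAB, p=q)              # draft model proposes token x
    p = target_dist(ctx)
    if rng.random() < min(1.0, p[x] / q[x]):
        return x                            # target accepts the proposal
    resid = np.maximum(p - q, 0.0)          # otherwise resample from the
    return rng.choice(VOCAB, p=resid / resid.sum())  # residual distribution

print([speculative_step(ctx=None) for _ in range(5)])
```

The payoff in real systems is that several draft tokens can be verified by the target model in one forward pass, trading a few rejections for far fewer expensive calls.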
Concerns about the environmental impacts of Large Language Models (LLMs) are growing. Although detailed information about the actual costs of LLMs can be difficult to find, let’s attempt to gather some facts to understand the scale. Since comprehensive data on ChatGPT-4 is not readily available, we can consider Llama 3.1 405B as an…
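To get a feel for the scale, a back-of-envelope calculation helps. The GPU-hour, power, and PUE figures below are illustrative assumptions, not numbers from the article; substitute the values reported for a given model to reproduce a real estimate.

```python
# Back-of-envelope training-energy estimate (all figures are assumptions).
gpu_hours = 30_000_000   # assumed cumulative GPU-hours of training
gpu_power_kw = 0.7       # assumed ~700 W per H100-class GPU
pue = 1.1                # assumed data-centre power usage effectiveness

energy_mwh = gpu_hours * gpu_power_kw * pue / 1_000
print(f"~{energy_mwh:,.0f} MWh")  # ≈ 23,100 MWh under these assumptions
```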
Understand missing data patterns (MCAR, MNAR, MAR) for better model performance with Missingno
In an ideal world, we would like to work with datasets that are clean, complete and accurate. However, real-world data rarely meets our expectations. We often encounter datasets with noise, inconsistencies, outliers and missingness, which require careful handling to get effective results.…
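Here is a minimal sketch of inspecting missingness with Missingno on a toy DataFrame; `msno.matrix` and `msno.heatmap` are the library’s standard nullity visualisations, while the data itself is made up for illustration.

```python
# Visualising missing-data patterns with Missingno (toy data).
import numpy as np
import pandas as pd
import missingno as msno

df = pd.DataFrame({
    "age":    [25, np.nan, 31, 40, np.nan],
    "income": [50_000, 62_000, np.nan, 71_000, 58_000],
    "city":   ["NY", "LA", None, "SF", "NY"],
})

msno.matrix(df)    # where the gaps are, row by row
msno.heatmap(df)   # nullity correlations: a first hint at MAR vs MCAR patterns
```

Correlated missingness across columns suggests MAR (or MNAR), whereas gaps that look independent of everything else are more consistent with MCAR.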
This November 30 marks the second anniversary of ChatGPT’s launch, an event that sent shockwaves through technology, society, and the economy. In the space opened by this milestone, it has not always been easy, or perhaps even possible, to separate reality from expectations. For example, this year Nvidia became the most valuable public company…
|LLM|INTERPRETABILITY|SPARSE AUTOENCODERS|XAI|
A deep dive into LLM visualization and interpretation using sparse autoencoders
“All things are subject to interpretation; whichever interpretation prevails at a given time is a function of power and not truth.” — Friedrich Nietzsche
As AI systems grow in scale, it is increasingly difficult and pressing…
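To ground the idea, here is a minimal sparse autoencoder sketch in PyTorch, of the kind used to decompose LLM activations into interpretable features. The dimensions, L1 coefficient, and random “activations” are illustrative assumptions, not the article’s setup.

```python
# Minimal sparse autoencoder on stand-in LLM activations (illustrative).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048):
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)  # overcomplete feature basis
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        f = torch.relu(self.enc(x))  # sparse, non-negative feature activations
        return self.dec(f), f

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(64, 512)  # stand-in for a batch of LLM residual activations

opt.zero_grad()
recon, feats = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()  # MSE + L1 sparsity
loss.backward()
opt.step()
```

The L1 penalty pushes most feature activations to zero, so each input is reconstructed from a handful of features that can then be inspected and, with luck, interpreted.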