IBM UFCG

Research and development on AI, high-performance hardware, and infrastructure — a partnership between UFCG and IBM.

    • Introduction

      Welcome to the blog of the partnership between the Federal University of Campina Grande (UFCG) and IBM!

      This space brings together articles, tutorials, and research results produced by our team across different projects. Each project focuses on a distinct area of research:

      • LLM Evaluation — evaluation of large language models, with a focus on benchmarks for Brazilian Portuguese.
      • AgentOps — development of AI agents capable of autonomously performing multiple tasks.
      • Judo-AI — use of AI models for analysis of judo matches and training sessions, applying computer vision and deep learning techniques for movement detection and action recognition.
      • 5G — integration of AI techniques in 5G network environments, with intelligent control, optimization, and network management mechanisms.
      • MultiArq — provisioning common tools for emerging architectures (ppc64le), sourcing and adapting architecture-specific tools, and producing technical documentation about the architecture.

      Browse the posts and follow the latest updates!

    • LLM Inference with Ollama on IBM Power9 Using CPU

      In this post, we present a practical guide for performing LLM inference with Ollama on IBM Power9 (ppc64le architecture), using the CPU.

    • Power9 Virtualization: how we structured an isolated environment with KVM and Libvirt

      In this post, we explore how we built a virtualized environment using KVM and Libvirt on a Power9 server, focusing on isolation, reproducibility, and shared team usage.

    • Evaluation of IBM Granite Models for Code-Generation Tasks on HumanEvalX

      We evaluated the IBM Granite family on code-generation tasks using the HumanEvalX benchmark, covering five programming languages and analyzing how models of different sizes perform across these scenarios.

    • Computação@UFCG Leads Brazil's Contributions to the HELM-Stanford Framework in Partnership with IBM

      Collaboration between UFCG’s Computer Science department and IBM makes the university the top Brazilian contributor to the HELM-Stanford evaluation framework in 2025.

    • LLMs Inference API on IBM Power9 Server

      This post is part of a tutorial series whose ultimate goal is to build an LLM API on Power9 servers. In this stage, we present the API and show how to make requests to it.

    • Building an API for LLM inferences on IBM Power9 servers

      This is the third post in a tutorial series that walks through the process of building an LLM API on an IBM Power9 server. In this stage, we will develop the API using FastAPI and the Transformers library.

    • Setting Up Conda and PyTorch on IBM Power9 Servers

      This post is part of a tutorial series aimed at building a Language Model API on Power9 servers. In this step, we’ll set up the Conda package manager and the PyTorch library.

    • Setting Up the OS, NVIDIA Drivers, CUDA, and cuDNN on IBM Power9 Servers

      This post is part of a tutorial series aimed at building a Large Language Model API on Power9 servers. In this step, we’ll set up the operating system and install NVIDIA drivers, CUDA, and cuDNN.

    • Evaluating Small-Scale LLMs (up to 8B) on PT-BR Benchmarks

      In this post, we present the results of evaluating small-scale LLMs on sentiment analysis and MQA tasks in Brazilian Portuguese, using the HELM framework.

    • Performing CPU Inference on Power10

      This post explains how to run the Granite-20b-Code-Instruct model on CPU on a Power10 machine.

    • IBM & UFCG - 2025