Fortaleza brought the heat in more ways than one. At the 35th Brazilian Conference on Intelligent Systems (BRACIS 2025), co-located with the Brazilian Symposium on Databases, roughly a thousand researchers, students, and industry builders compared notes on where AI is heading in Portuguese-speaking markets and beyond. We spent four packed days listening, learning, and swapping ideas at the Thomson Reuters Labs booth.
Theme 1: RAG is moving from “does it work” to “how do we make it excellent”
Large language models were everywhere, and the conversation has shifted from proofs of concept to the fine engineering details that make retrieval-augmented generation dependable. Talks explored better example selection for few-shot learning in named entity recognition, multi-objective prompt optimization, and practical ways to get more out of small language models for tasks like text-to-SQL. A strong current ran through sessions on self-supervised fine-tuning for low-resource settings in Portuguese, which matters when you want quality without ballooning labeling costs.
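The exemplar-selection idea behind several of these talks is straightforward: instead of using fixed few-shot examples, retrieve the training examples most similar to the current input. As a toy sketch (our own illustration, not any paper's method, with random vectors standing in for sentence embeddings):

```python
import numpy as np

def select_exemplars(query_vec, pool_vecs, k=2):
    """Pick the k pool examples most similar to the query (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    p = pool_vecs / np.linalg.norm(pool_vecs, axis=1, keepdims=True)
    sims = p @ q                      # cosine similarity of each pool item to the query
    return np.argsort(-sims)[:k]     # indices of the k nearest examples

# Toy 2-d embeddings standing in for sentence encodings.
pool = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
query = np.array([1.0, 0.05])
print(select_exemplars(query, pool, k=2))  # → [0 1]
```

In a real pipeline the vectors would come from a sentence encoder, and the selected examples would be formatted into the few-shot prompt for the NER model.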
Why it matters: evaluation discipline wins. Techniques like smarter exemplar selection and targeted fine-tuning can squeeze more signal from smaller models, which is good for latency, cost, and control.
Theme 2: Legal AI in Brazilian Portuguese is coming of age
BRACIS has long welcomed legal NLP, but this year felt like an inflection point. Alongside papers, the program featured a legal-search hackathon, new datasets for Brazilian personal income tax Q&A with references, and applied work on pseudo-labeling to classify Brazilian Supreme Court (STF) documents. We also saw encouraging results from LoRA fine-tuning on specific layers for legal NER, plus efforts to train a Brazilian legal model on reputable sources such as academic papers, federal regulations, and STF decisions.
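The appeal of LoRA here is easy to see in numbers: freeze the base weights and learn only a low-rank update. A minimal NumPy sketch of the core idea (illustrative shapes and hyperparameters, not the settings from the BRACIS papers):

```python
import numpy as np

d, r = 768, 8  # hidden size and LoRA rank (illustrative values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen base weight matrix
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized
alpha = 16                               # scaling factor

# Effective weight at inference; with B zero-initialized, training starts
# exactly at the base model (W_adapted == W).
W_adapted = W + (alpha / r) * (B @ A)

full = d * d       # parameters in the full weight matrix
lora = 2 * d * r   # trainable parameters in A and B combined
print(f"trainable params: {lora} vs {full} ({lora / full:.1%})")
```

Targeting only specific layers, as the legal-NER work does, shrinks the trainable-parameter count even further, which is what makes domain adaptation feasible on modest compute budgets.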
Why it matters: the ecosystem is assembling the right ingredients for professional-grade systems in Portuguese. Grounded datasets with citations, efficient fine-tuning strategies, and domain-specific models support workflows where provenance and precision are non-negotiable.
Theme 3: Meta-learning and bio-inspired methods still matter
Beyond LLMs and RAG, BRACIS gave healthy space to meta-learning and bio-inspired algorithms. Meta-learning sessions underscored a practical goal: make models adapt faster with less data when tasks or domains shift. Bio-inspired talks reminded everyone that not every real-world problem yields to transformers, and that hybrid approaches can unlock performance in areas like optimization and scheduling.
Why it matters: adaptability is a feature, not a phase. As products meet new jurisdictions, document types, and regulatory changes, methods that shorten adaptation time will pay dividends for users and engineering teams.
Two sessions that stuck with us
- Low-resource and Small Language Models. Four papers tackled few-shot NER, prompt optimization, text-to-SQL with SLMs, and self-supervised fine-tuning for Portuguese. The comparative analysis around example selection for NER offered heuristics worth testing in similar pipelines.
- Legal Applications of NLP. A diverse slate showcased pseudo-labeling for STF document classification, LoRA strategies that improve legal NER with less compute, a cited Brazilian tax-law Q&A dataset, and a domain-trained Brazilian legal model. The common thread was rigor: use trusted sources, design for evaluation, and keep humans in the loop.
Conversations around the booth
Our booth became a meeting point for researchers, students, and practitioners from every region of Brazil. We spoke with public-sector technologists, members of national research institutes, and graduate students eager to apply their work to real-world legal and tax problems. The goal was twofold: highlight our growing technology presence in Brazil and connect with MSc and PhD candidates interested in applied AI. The interest was tangible, and we left with a long list of follow-ups: people to reconnect with and collaborations to explore.
Why this matters
Brazil has one of the most dynamic AI communities in the world right now. The momentum in Fortaleza was not about flashy demos. It was about building dependable systems in Portuguese that cite sources, respect context, and meet the bar set by courts, regulators, and corporate clients. That is the kind of AI that will endure.
For us, BRACIS 2025 was a reminder that the future of professional-grade AI will be multilingual, domain-specific, and evaluation-first. It will favor designs that get more from less, and it will be built by researchers and engineers who know how to blend model craft with real-world constraints.
Obrigado, Fortaleza. We will be back.
---
This blog was authored by Thiago Covões, Senior Applied Scientist; Alex Martins, Lead Research Engineer; and Leonardo De Marchi, VP, Thomson Reuters Labs.