Tomasz Godzik
VirtusLab
Maintainer of multiple Scala tools including Metals, Bloop, Scalameta, MUnit, Mdoc, and parts of the Scala 3 compiler.
Release officer for Scala LTS versions
Part of the Scala Core team as coordinator and VirtusLab representative
Part of the moderation team
How smart are LLMs, really?
Prompting 101
Context Engineering
New tools ecosystem
Approaches to working with LLMs
Questions
The LLM has fallen into well-known paths
Something more?
The hype can get too much at times though.
I think this was already shown in previous presentations.
Using strongly typed languages helps reduce the risk of using LLMs in your development.
But just take a look at the recent AWS outages where coding agents were heavily involved.
> Create a Scala LSP server
Obviously not
You are a Scala compiler and tooling expert tasked with creating a Scala LSP server. Use scala-cli to set up, run and test the project.
persona
task
context
You are a Scala compiler and tooling expert tasked with creating a Scala LSP server. Use scala-cli to set up, run and test the project. Here is some compiler API you can use.
import scala.reflect.internal.util.BatchSourceFile
import scala.reflect.io.AbstractFile
import scala.tools.nsc.*
import scala.tools.nsc.interactive.InteractiveReporter

val compilerSettings = new Settings()
compilerSettings.classpath.value = "/path/to/classes.jar:/path/to/scala-library.jar"
compilerSettings.usejavacp.value = true // or set the classpath explicitly
val reporter = new InteractiveReporter {
  override val settings: Settings = compilerSettings
}
val global = new Global(compilerSettings, reporter) {
  // presentation compiler: override as needed (Metals uses a custom Global)
}
val source = new BatchSourceFile(
  AbstractFile.getFile("/tmp/Example.scala"),
  "object Example { val x = 42 }\n"
)
// Load and typecheck (simplified; the real PC uses ask* on a dedicated thread)
val run = new global.Run()
run.compileSources(List(source))
// Inspect units: global.unitOfFile, trees, symbols (API is Global-specific)
Prompt chaining: Each conversation produces artifacts for the next prompt.
Reasoning models: Ask the model to explain its reasoning, default in some models.
Meta prompting: Prompt to create new prompts for later models.
Agentic simulations: Make agents fulfill some persona and simulate things like recruitment calls etc.
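The chaining and meta-prompting ideas above can be sketched in a few lines of Scala. Note that `queryLLM` here is a hypothetical stub standing in for any real chat-completion client; only the shape of the chain matters.

```scala
// Hypothetical sketch of prompt chaining: each call produces an artifact
// that is fed into the next prompt. `queryLLM` is a stub, not a real API.
def queryLLM(prompt: String): String =
  s"<model response to: ${prompt.linesIterator.next()}>"

@main def chain(): Unit =
  // Step 1: exploration produces a plan artifact
  val plan = queryLLM("Outline the modules needed for a Scala LSP server.")
  // Step 2: the plan becomes context for the next, narrower prompt
  val spec = queryLLM(s"Write a detailed spec for the first module of:\n$plan")
  // Step 3: meta prompting, i.e. asking for a prompt for a later model
  val agentPrompt = queryLLM(s"Create a prompt telling a coding agent to implement:\n$spec")
  println(agentPrompt)
```

Each intermediate artifact (`plan`, `spec`) is exactly the kind of output you would carry over into a fresh conversation.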
Some of the existing techniques made it into tools.
The more techniques people come up with, the more the tooling catches up.
The ideas are basically endless.
When the input provided with the initial prompt is big enough, it will confuse the model and lead it astray from the original task.
When a conversation gets reused for new tasks, the previous results end up confusing the model's attention mechanisms.
More data in context gets attached before the prompt and uses more tokens, which cost money.
1. Provide only the information (and capabilities) necessary to execute the task at hand.
2. Prefer to work in shorter conversations that don’t reach the full size of the available context window.
3. Use compaction and external progress tracking to distill and preserve important decisions and insights.
> Summarize the current decisions and progress so that we can use it in another conversation
Always provide the same prefix; the prompt prefix of each LLM query is cached.
If you change it, the model will need to recalculate everything.
This cache is only available for a limited time.
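The layout can be sketched as follows; the split into a stable prefix and a varying suffix is generic, not tied to any specific provider.

```scala
// Sketch: providers cache the computation for a request prefix, so the
// parts that rarely change should come first and stay byte-identical.
val stablePrefix: String =
  """You are a Scala compiler and tooling expert.
    |<tool definitions, project context, large docs: big, rarely changes>
    |""".stripMargin

// Only the suffix varies between queries, so the cached prefix is reused.
def buildRequest(userMessage: String): String =
  stablePrefix + userMessage
```

Any edit to `stablePrefix`, even reordering, invalidates the cache and forces the model to recompute everything from that point on.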
ask: does not change anything, used for exploration.
plan: create documents needed for further work, any specification timeline etc.
agent: actually apply changes, test code, compile until the results are satisfactory.
Most of the time you will use the agent mode.
Agents can use a variety of tools to enrich their context and verify the results of their work.
Every development tool might be useful for you, but some might need work.
One of the more popular tools to improve the output of LLMs
MCP uses JSON-RPC, so each possible request and response is represented by a JSON schema
The Model Context Protocol is very similar to the Language Server Protocol, but instead of serving people through editors, it serves agents.
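For illustration, a tool call in MCP is a plain JSON-RPC 2.0 request. The envelope and the `tools/call` method below follow the MCP specification; the tool name is taken from the Metals tools listed later, while the argument key is an assumption for the sketch.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "compile-file",
    "arguments": { "fileInFocus": "src/main/scala/Main.scala" }
  }
}
```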
[Diagram: a host with an MCP client talks over the MCP protocol to several MCP servers; each server fronts a local data source or a web API.]
compilation: compile-full, compile-module, compile-file
test: test
analysis: glob-search, inspect, get-docs, get-usages
utility: import-build, format-file, find-dep, run-scalafix-rule, list-scalafix-rules
Instead of ingesting the entire context, create smaller tools to change codebase or extract information.
Using typed languages like Scala makes this significantly more efficient.
// ./skills/getDocsFor.scala
import metals.mcp.*

@main
def main(nameToLookFor: String) =
  val allMatching = globSearch(nameToLookFor, None)
  val fqcns = allMatching.split("\n")
  val result = fqcns.map { fullName => getDocs(fullName) }.mkString("\n")
  println(result)
---
name: your-skill-name
description: Brief description of what this Skill does and when to use it
---
# Your Skill Name
## Instructions
[Clear, step-by-step guidance for Claude to follow]
## Examples
[Concrete examples of using this Skill]
These scripts could be added as skills to your company repo.
Some of the scripts could also improve CI, since they don't require a lot of code.
Automate any manual work even if normally writing a script would take longer than the work.
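As a toy example of the kind of throwaway automation meant here (the file paths and import names are made up), a scala-cli script that rewrites a deprecated import across all sources:

```scala
//> using scala 3
// Toy throwaway automation: rewrite a deprecated import in every .scala
// file under the given directory. The old/new import names are made up.
import java.nio.file.*
import scala.jdk.CollectionConverters.*

def rewrite(content: String): String =
  content.replace("import old.pkg.Json", "import new.pkg.Json")

@main def run(dir: String): Unit =
  Files.walk(Path.of(dir)).iterator().asScala
    .filter(_.toString.endsWith(".scala"))
    .foreach { p =>
      val updated = rewrite(Files.readString(p))
      Files.writeString(p, updated)
    }
```

An agent can generate and run a script like this in under a minute, which is usually faster than doing the edit by hand across a large repo.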
These are projects that were previously written using normal engineering practices, and we want to add features or fix bugs.
It might be harder to use LLMs in such projects; tasks should be well scoped, with all the data the LLM agent might need made available.
Provide as many tools as needed to make sure the context is spot on.
A lot depends on the complexity and the existing quality of the code.
// AGENTS.md
Look into Architecture.md for system design.
Use Scala CLI as the build tool.
Use Metals MCP:
- compile tool to verify compilation
- format tool after all changes
// .cursor/mcp.json
{
"mcpServers": {
"metals2-metals": {
"url": "http://localhost:60849/mcp"
}
}
}
//.cursor/skills/cellar.md
---
name: Cellar dependency inspection
description: Use the cellar CLI tool to download sources
---
# Cellar Dependency Inspection
## Instructions
### Look up a symbol from your sbt / mill / scala-cli project
cellar get --module lib cellar.handlers.SearchHandler
// Architecture.md
The system consists of four components:
- database
- service
- frontend
- service2
To modify...
We can also build something from scratch with LLMs in mind.
Makes it possible to create small to mid-sized projects almost automatically, with guidelines from the developer.
Exploration
Product planning
Component analysis
Implementation
Full product idea
Full project specification
Specification with all modules planned
Half a year ago this was not feasible, but it is becoming more and more so.
It is more expensive and complex to monitor.
But it can, for example, be used to implement multiple components at the same time, or to make product exploration faster.
Use tools to verify and improve LLM output.
Make sure your prompts are sound and context provided is well scoped.
You need to know your domain to be able to verify the output.
Bluesky: @tgodzik.bsky.social
Mastodon: fosstodon.org/@tgodzik
Discord: tgodzik
virtuslab.com/blog/scala - plenty of blog posts about Scala and LLMs