# MLX

This notebook shows how to get started using MLX LLMs as chat models.
In particular, we will:
- Utilize the MLXPipeline,
- Utilize the ChatMLX class to enable any of these LLMs to interface with LangChain’s Chat Messages abstraction.
- Demonstrate how to use an open-source LLM to power a ChatAgent pipeline.
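To follow along, the MLX and Hugging Face packages need to be available. A minimal install sketch (the exact package set here is an assumption based on the mlx-lm ecosystem; adjust for your environment):

```python
%pip install --upgrade --quiet mlx-lm transformers huggingface_hub
```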
## 1. Instantiate an LLM
There are three LLM options to choose from.
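As a sketch of one of those options, here is how the MLXPipeline wrapper from langchain-community can load a local model. The `mlx-community/quantized-gemma-2b-it` model id and the `pipeline_kwargs` values are illustrative choices, not the only ones:

```python
from langchain_community.llms.mlx_pipeline import MLXPipeline

# Load a quantized Gemma model from the MLX community hub;
# any mlx-lm compatible model id should work here.
llm = MLXPipeline.from_model_id(
    "mlx-community/quantized-gemma-2b-it",
    pipeline_kwargs={"max_tokens": 10, "temp": 0.1},
)
```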
## 2. Instantiate the ChatMLX to apply chat templates

Instantiate the chat model and some messages to pass.
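A minimal sketch of wiring the pipeline into ChatMLX (the example message content is arbitrary):

```python
from langchain_community.chat_models.mlx import ChatMLX
from langchain_core.messages import HumanMessage

messages = [
    HumanMessage(
        content="What happens when an unstoppable force meets an immovable object?"
    ),
]

chat_model = ChatMLX(llm=llm)

# ChatMLX applies the underlying model's chat template to the
# messages and returns an AIMessage.
res = chat_model.invoke(messages)
print(res.content)
```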
## 3. Take it for a spin as an agent
Here we’ll test out `gemma-2b-it` as a zero-shot ReAct Agent. The example below is taken from here.
> Note: To run this section, you’ll need to have a SerpAPI Token saved as an environment variable: `SERPAPI_API_KEY`
Configure the agent with a react-json style prompt and access to a search engine and calculator.
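A sketch of that configuration using LangChain's agent utilities. The `hwchase17/react-json` hub prompt and the `serpapi` / `llm-math` tool names are LangChain's standard ones; the final question is illustrative:

```python
from langchain import hub
from langchain.agents import AgentExecutor, load_tools
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActJsonSingleInputOutputParser
from langchain.tools.render import render_text_description

# Search engine (SerpAPI) and calculator tools.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# ReAct-style prompt that expects JSON-formatted tool calls.
prompt = hub.pull("hwchase17/react-json")
prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)

# Stop generation before the model fabricates a tool observation.
chat_model_with_stop = chat_model.bind(stop=["\nObservation"])

# Assemble the ReAct agent as a runnable pipeline.
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
    }
    | prompt
    | chat_model_with_stop
    | ReActJsonSingleInputOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke(
    {
        "input": "Who is Leo DiCaprio's girlfriend? "
        "What is her current age raised to the 0.43 power?"
    }
)
```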