LangChain agents documentation
Introduction: LangChain is a framework for developing applications powered by large language models (LLMs). It simplifies every stage of the LLM application lifecycle: development (build your applications using LangChain's open-source building blocks, components, and third-party integrations), productionization, and deployment. langchain-core defines the base abstractions for the LangChain ecosystem, and LangGraph is used to build stateful agents with first-class streaming and human-in-the-loop support. When you use the LangChain products together, you build better, get to production quicker, and gain visibility, all with less setup and friction.

Prompt Templates: prompt templates help to translate user input and parameters into instructions for a language model. For a ReAct-style agent, the prompt must have the input keys tools (descriptions and arguments for each tool) and tool_names (all tool names). Legacy agents are driven by an LLMChain; the LangGraph ReAct agent executor, created with the create_react_agent prebuilt helper method, covers the same configuration parameters.

Related guides and integrations: the Pandas DataFrame notebook shows how to use agents to interact with a Pandas DataFrame; tools within the SQLDatabaseToolkit are designed to interact with a SQL database (for detailed documentation of all SQLDatabaseToolkit features and configurations, head to the API reference); a LangGraph tutorial shows how to implement an agent with long-term memory, so the agent can store, retrieve, and use memories to enhance its interactions with users; LangSmith helps because many of the applications you build with LangChain contain multiple steps with multiple invocations of LLM calls; Vertex AI Agent Engine handles the infrastructure to scale agents in production so you can focus on creating applications; langchain-ai/langchain-mcp-adapters on GitHub provides MCP support; and there are integrations for developing and deploying LLMs on Databricks.

load_tools(tool_names: List[str], llm: BaseLanguageModel | None = None, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, allow_dangerous_tools: bool = False, **kwargs: Any) → List[BaseTool] loads tools based on their name; tools (Sequence[BaseTool]) are the tools the agent has access to.

Agents: chains are great when we know the specific sequence of tool usage needed for any user input, but for certain use cases, how many times we use tools depends on the input. In agents, a language model is used as a reasoning engine to determine which actions to take and in which order; when the agent reaches a stopping condition, it returns a final return value. A typical example is an agent that returns the exchange rate between two currencies on a specified date.
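The sketch below illustrates the create_react_agent prebuilt helper with the exchange-rate idea mentioned above. It is not taken from the official docs: the tool body, the model name gpt-4o-mini, and the hard-coded rate are placeholder assumptions, and it requires the langgraph and langchain-openai packages plus an OPENAI_API_KEY.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def exchange_rate(base: str, quote: str, date: str) -> str:
    """Look up the exchange rate between two currencies on a given date."""
    # Placeholder: a real tool would call a currency API here.
    return f"1 {base} = 0.92 {quote} on {date}"

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
agent = create_react_agent(llm, [exchange_rate])

# The agent loops between the model and the tool until it can answer.
result = agent.invoke(
    {"messages": [("user", "What was the USD to EUR rate on 2024-01-15?")]}
)
print(result["messages"][-1].content)
```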
A big use case for LangChain is creating agents. New to LangChain or LLM app development in general? Read this material to quickly get up and running building your first applications; for more details, see the installation guide. The quickstart follows these steps, sketched in the examples below: define and configure a model; define and use a tool; (optional) store chat history; (optional) customize the prompt template. To start, we set up the retriever we want to use and then turn it into a retriever tool. LangChain's architecture allows developers to integrate LLMs with external data, prompt engineering, retrieval-augmented generation (RAG), semantic search, and agent workflows.

Agent Types categorizes all the available agents along a few dimensions. The older ConversationalAgent class is deprecated, and Open Agent Platform ships two pre-built agents customized specifically for it: a Tools Agent and a Supervisor Agent. A common parameter across constructors is llm (BaseLanguageModel), the language model to use for the agent.

Agent prompts have firm requirements. For tool-calling agents, the prompt must have an agent_scratchpad key that is a MessagesPlaceholder; for legacy agents, the prompt in the LLMChain must include a variable called "agent_scratchpad" where the agent can put its intermediary work.
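As a rough sketch of the legacy path, the example below pulls the public "hwchase17/react" prompt, which already contains the required {tools}, {tool_names}, {input}, and {agent_scratchpad} variables. The toy tool, the model name, and the use of the langchainhub package are assumptions, not part of the original text.

```python
from langchain import hub  # requires the langchainhub package
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""  # hypothetical example tool
    return len(text.split())

# Prompt with {tools}, {tool_names}, {input}, {agent_scratchpad} already defined.
prompt = hub.pull("hwchase17/react")

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [word_count]
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True)
executor.invoke({"input": "How many words are in 'LangChain builds agents'?"})
```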
The quickstart application is a relatively simple LLM application: it's just a single LLM call plus some prompting. Still, this is a great way to get started with LangChain, because a lot of features can be built with just some prompting and an LLM call.

Agents are autonomous systems within LangChain that take actions based on input data. They can call external APIs or query databases dynamically, making decisions based on the situation. An AgentAction represents a request to execute an action by an agent: it consists of the name of the tool to execute (tool), the input to pass to the tool (tool_input), and a log used to pass along extra information about the action. Tool-calling agents are generally the most reliable way to create agents, and the API reference documents all Agent classes. create_csv_agent(llm: LanguageModelLike, path: str | IOBase | List[str | IOBase], pandas_kwargs: dict | None = None, **kwargs: Any) → AgentExecutor creates a pandas DataFrame agent by loading a CSV into a DataFrame; note that this agent calls the Python agent under the hood, which executes LLM-generated Python code, and this can be bad if the generated code is harmful, so use it cautiously.

Here we focus on how to move from legacy LangChain agents to more flexible LangGraph agents. LangChain also has a SQL Agent, which provides a more flexible way of interacting with SQL databases than a chain: it can answer questions based on the databases' schema as well as on the databases' content (like describing a specific table), and it can recover from errors by running a generated query, catching the traceback, and regenerating it. We equip it with a set of tools using LangChain's SQLDatabaseToolkit, and exactly one of 'toolkit' or 'db' must be provided.
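A minimal sketch of assembling such a SQL agent is shown below, assuming a local SQLite file (the Chinook.db path and the model name are assumptions) and the langchain-community and langchain-openai packages.

```python
from langchain_community.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

# Hypothetical local database file; swap in your own connection URI.
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Provide exactly one of `toolkit` or `db` to create_sql_agent.
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)

agent_executor.invoke({"input": "How many tables are in the database?"})
```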
Tools allow agents to interact with various resources and services like APIs. Each tool has a description, and the agent uses the description to choose the right tool for the job. The deprecated ConversationalAgent held a conversation in addition to using tools; new use cases should use constructor methods such as create_react_agent, create_structured_chat_agent, or create_xml_agent instead, or the conversational retrieval agent, which is optimized for doing retrieval when necessary while also holding a conversation. For multi-agent work there is also a Python library for creating hierarchical multi-agent systems using LangGraph, while the AgentExecutor class (an agent that is using tools) remains the runtime for legacy agents. Autonomous agents are designed to be longer running: you give them one or multiple long-term goals, and they independently execute towards those goals.

Agent types also differ by intended model type: whether the agent is intended for chat models (takes in messages, outputs a message) or LLMs (takes in a string, outputs a string). The main thing this affects is the prompting strategy used. You can use an agent with a different type of model than it is intended for, but it likely won't produce good results.

create_pandas_dataframe_agent returns an AgentExecutor with the specified agent_type and access to a PythonAstREPLTool with the loaded DataFrame(s) and any user-provided extra_tools. In this example we use tool calling to create the agent, since that is generally the most reliable approach:

    from langchain_openai import ChatOpenAI
    from langchain_experimental.agents import create_pandas_dataframe_agent
    import pandas as pd

    df = pd.read_csv("titanic.csv")
    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    agent_executor = create_pandas_dataframe_agent(
        llm, df, agent_type="tool-calling", verbose=True
    )
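For a general tool-calling agent (not tied to a DataFrame), the sketch below shows the agent_scratchpad MessagesPlaceholder requirement described earlier. The multiply tool, the system message, and the model name are illustrative assumptions.

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# The prompt must expose an `agent_scratchpad` MessagesPlaceholder, where
# intermediate tool calls and tool results are inserted during the run.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [multiply]
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
executor.invoke({"input": "What is 6 times 7?"})
```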
Note: since langchain migrated to v0.3, you should upgrade langchain_openai and related packages. If you're using pre-built LangChain or LangGraph components like create_react_agent, you might not need to interact with tools directly; however, understanding how to use them can be valuable for debugging and testing, and when building custom LangGraph workflows you may find it necessary to work with tools directly.

The first step in setting up Open Agent Platform is to deploy and configure your agents. LangSmith is an observability and evals platform for debugging, testing, and monitoring any AI application. The LangMem SDK is a library that helps your agents learn and improve through long-term memory: it provides tooling to extract information from conversations, optimize agent behavior through prompt updates, and maintain long-term memory about behaviors, facts, and events; it integrates with LangChain and LangGraph, and you can use its core API with any storage.

create_json_agent(llm: BaseLanguageModel, toolkit: JsonToolkit, callback_manager: BaseCallbackManager | None = None, prefix: str = 'You are an agent designed to interact with JSON. Your goal is to return a final answer by interacting with the JSON. You have access to the following tools which help you learn more about the JSON...') builds an agent for exploring JSON; some language models are particularly good at writing JSON.

Quick Start: to best understand the agent framework, let's build an agent that has two tools: one to look things up online, and one to look up specific data that we've loaded into an index. Toolkits are sets of tools that can be used to interact with various services and APIs. The FileManagementToolkit, for example, is a toolkit for interacting with local files. Security notice: this toolkit provides methods to interact with local files, and by default the agent will have access to all files, so use it cautiously.
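One way to narrow that file access is to restrict the toolkit to a sandbox directory and a subset of its tools, as in this sketch (the temporary directory and the chosen tool names are just an example):

```python
from tempfile import TemporaryDirectory
from langchain_community.agent_toolkits import FileManagementToolkit

working_dir = TemporaryDirectory()

# Scope the toolkit to a sandbox directory and to read-only operations,
# instead of exposing every file tool to the agent.
toolkit = FileManagementToolkit(
    root_dir=str(working_dir.name),
    selected_tools=["read_file", "list_directory"],
)
tools = toolkit.get_tools()
print([t.name for t in tools])  # ['read_file', 'list_directory']
```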
If providing this toolkit to an agent on an LLM, ensure you scope the agent's permissions to include only what is necessary to perform the desired operations.

Agents are systems that take a high-level task and use an LLM as a reasoning engine to decide what actions to take and execute those actions. The agent executes the action (e.g., runs the tool) and receives an observation; the observation is returned to the LLM, which can then use it to generate the next action. LangGraph is an extension of LangChain specifically aimed at creating highly controllable and customizable agents, and it offers a more flexible, full-featured framework for building them, including support for tool calling, persistence of state, and human-in-the-loop workflows; LangChain agents will continue to be supported, but new use cases are recommended to be built with LangGraph, and LangGraph Platform is the commercial platform for developing, deploying, and scaling long-running agents and workflows. A single agent might struggle if it needs to specialize in multiple domains or manage many tools; to tackle this, you can break your agent into smaller, independent agents and compose them into a multi-agent system. That raises two main considerations: what are the multiple independent agents, and how are those agents connected? This thinking lends itself incredibly well to a graph representation, such as that provided by langgraph.

At the library level, langchain-core defines the interfaces for core components like chat models, LLMs, vector stores, and retrievers, along with the universal invocation protocol (Runnables) and a syntax for combining components (LangChain Expression Language). AgentExecutor is the chain that runs an agent that is using tools; AgentOutputParser is the base class for parsing agent output into an AgentAction or AgentFinish; AgentActionMessageLog is similar to AgentAction but includes a message log consisting of chat messages, which is useful when working with chat models and for reconstructing conversation history from the agent's perspective; and create_json_chat_agent creates an agent that uses JSON to format its outputs and is aimed at supporting chat models.

Debugging agents got you down? LangSmith can help: it is a platform for building production-grade LLM applications that lets you closely monitor and evaluate your application so you can ship quickly and with confidence. Memory is needed to enable conversation. This state management can take several forms, from simply stuffing previous messages into the chat model prompt to trimming old messages to reduce the amount of distracting information the model has to deal with.
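A small sketch of the trimming approach, using langchain-core's trim_messages helper with the model as the token counter (the message history, token budget, and model name are made up for illustration):

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, trim_messages
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

history = [
    SystemMessage("You are a terse assistant."),
    HumanMessage("Hi, I'm Bob."),
    AIMessage("Hello Bob!"),
    HumanMessage("What's my name?"),
]

# Keep only the most recent messages that fit the token budget so old turns
# stop distracting the model; the model itself counts tokens.
trimmed = trim_messages(
    history,
    strategy="last",
    max_tokens=200,
    token_counter=llm,
    include_system=True,
    start_on="human",
)
print(llm.invoke(trimmed).content)
```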
The docstore agent is mostly optimized for question answering: it uses the ReAct framework to interact with a docstore, and two tools must be provided, a Search tool and a Lookup tool (they must be named exactly so). A related agent breaks down a complex question into a series of simpler questions and uses a search tool to answer the simpler questions in order to answer the original complex question. In the langchain.agents module, Agent is the class that uses an LLM to choose a sequence of actions to take; these schema definitions are provided for backwards compatibility. This walkthrough assumes knowledge of LLMs and retrieval, so if you haven't already explored those sections, it is recommended you do so.

One of the most powerful applications enabled by LLMs is sophisticated question-answering (Q&A) chatbots: applications that can answer questions about specific source information using a technique known as Retrieval Augmented Generation, or RAG. In multi-agent systems, agents need to communicate with each other; the examples here highlight how to integrate various types of tools, how to work with different types of agents, and how to customize agents.

The role of an agent in LangChain is to solve tasks that cannot be handled internally by the language model, such as numerical operations, web search, or terminal invocation.
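Such capabilities are typically attached via named tools; a brief sketch using load_tools is shown below. The specific tool names and the allow_dangerous_tools flag for the shell tool are assumptions based on the load_tools signature quoted earlier.

```python
from langchain_community.agent_toolkits.load_tools import load_tools
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# "llm-math" handles numerical operations; "terminal" runs shell commands and
# is only loaded when dangerous tools are explicitly allowed.
tools = load_tools(["llm-math", "terminal"], llm=llm, allow_dangerous_tools=True)
for t in tools:
    print(t.name, "-", t.description)
```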
The schemas for the agents themselves are defined in langchain.agents. There are several key components here: the schema, plus several abstractions LangChain provides to make working with agents easy. agent_scratchpad contains previous agent actions and tool outputs as a string. In agents, a language model is used as a reasoning engine to determine which actions to take and in which order; in chains, a sequence of actions is hardcoded in code. For repeated tool use, we want to let the model itself decide how many times to use tools and in what order, which is what agents provide.

Tools are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models. Callbacks allow you to hook into the various stages of your LLM application's execution; the how-to guides cover passing callbacks at runtime, attaching callbacks to a module, passing callbacks into a module constructor, creating custom callback handlers, and awaiting callbacks, as well as using legacy LangChain agents (AgentExecutor) and migrating from legacy agents to LangGraph.

In multi-agent systems, each agent can have its own prompt, LLM, tools, and other custom code to best collaborate with the other agents; in the supervisor architecture, specialized agents are coordinated by a central supervisor agent that controls all communication flow and task delegation, deciding which agent to invoke based on the current context and task requirements. Build controllable agents with LangGraph, LangChain's low-level agent orchestration framework, and develop, deploy, and scale them with LangGraph Platform, a purpose-built platform for long-running, stateful workflows. Vertex AI also offers a framework-specific LangChain template (the LangchainAgent class in the Vertex AI SDK for Python). For SQL, create_sql_agent(llm, ...) constructs a SQL agent from an LLM and a toolkit or database; a common application is to enable agents to answer questions using data in a relational database. To improve your LLM application development, pair LangChain with LangSmith, which is helpful for agent evals and observability (LangSmith documentation is hosted on a separate site). We will first create the agent WITHOUT memory, and then show how to add memory in.
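One way to add memory with the prebuilt LangGraph agent is a checkpointer that persists conversation state per thread, sketched below. The weather tool, thread id, and model name are illustrative assumptions.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""  # placeholder tool
    return f"It is sunny in {city}."

# The checkpointer stores state keyed by thread_id, so later calls on the
# same thread can see earlier turns.
agent = create_react_agent(
    ChatOpenAI(model="gpt-4o-mini"), [get_weather], checkpointer=MemorySaver()
)
config = {"configurable": {"thread_id": "demo-thread"}}

agent.invoke({"messages": [("user", "Hi, I'm Bob. What's the weather in Paris?")]}, config)
reply = agent.invoke({"messages": [("user", "What's my name?")]}, config)
print(reply["messages"][-1].content)
```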
How to add memory to chatbots: a key feature of chatbots is their ability to use the content of previous conversational turns as context. Likewise, the results of an agent's actions can be fed back into the agent so it can determine whether more actions are needed or whether it is okay to finish.

As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. LangSmith gives you the explainability to understand why your agents go off track and how to get them humming again; it integrates seamlessly with LangChain and LangGraph, so you can inspect and debug individual steps of your chains and agents as you build. Deploy and scale with LangGraph Platform, with APIs for state management, a visual studio for debugging, and multiple deployment options. The wider ecosystem also includes the LangChain MCP adapters, Vertex AI Agent Engine (formerly known as LangChain on Vertex AI or Vertex AI Reasoning Engine), a set of services for deploying, managing, and scaling AI agents in production, and Open Agent Platform, which is designed with simplicity in mind, making it accessible to users without technical expertise while still offering advanced capabilities for developers. While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.

Custom agent: this notebook goes through how to create your own custom agent. First, load the language model you are going to use; common parameters are llm (LanguageModelLike), the language model to use for the agent, and prompt (BasePromptTemplate), the prompt to use, which guides the model's response and helps it generate relevant, coherent output. A basic agent works in the following manner: given a prompt, the agent uses an LLM to request an action to take (e.g., a tool to run), executes the action, feeds the observation back, and repeats until it is done.
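The following minimal loop sketches that basic cycle directly on top of a tool-calling chat model, without AgentExecutor or LangGraph. The add tool, the model name, and the stopping rule (no tool calls means done) are assumptions made for illustration.

```python
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0).bind_tools([add])
tools_by_name = {"add": add}
messages = [HumanMessage("What is 2 + 3?")]

# Loop: ask the model for an action, run the requested tool, feed the
# observation back, and stop once the model answers without tool calls.
while True:
    ai_msg = llm.invoke(messages)
    messages.append(ai_msg)
    if not ai_msg.tool_calls:
        break
    for call in ai_msg.tool_calls:
        observation = tools_by_name[call["name"]].invoke(call["args"])
        messages.append(ToolMessage(content=str(observation), tool_call_id=call["id"]))

print(messages[-1].content)
```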