Executive Summary
FMSB’s core message is that AI in trading is real, growing, and relevant, but still relatively early in its market-facing deployment. The report argues that the financial industry has long used quantitative models and machine learning, yet the newest generation of AI techniques is only beginning to be integrated into trading systems.
Rather than portraying AI as a revolutionary force that has already taken control of markets, the paper adopts a more grounded view: today’s AI is usually embedded in specific modules such as liquidity analysis, venue selection, pricing forecasts, or execution metrics, while humans remain firmly in charge of supervision and escalation.
A second major idea is that the risks of AI come less from the label “AI” itself and more from how broadly and how critically the model is used. A simple model supporting an input signal may be relatively low risk; an AI-driven system that directly shapes trading actions at scale is much more consequential.
FMSB therefore recommends focusing less on abstract debates over defining AI and more on the actual use case, the complexity of the task, the degree of autonomy involved, and the strength of controls around outputs. Existing frameworks such as model risk management and algorithmic trading controls already cover much of this terrain, though firms may need to update them as AI becomes more complex and less interpretable.
Introduction
Trading has progressively moved from manual processes to electronic workflows, and from there to increasingly automated, data-driven systems. AI is the next step in that progression: a tool that may replicate aspects of human judgment while operating at far greater speed and scale. That creates clear upside in terms of efficiency and revenue potential, but it also raises new kinds of operational, model, and governance challenges.
The scope of the report is market-facing AI: models or systems whose outputs directly or indirectly influence pricing, execution, or client-facing trading decisions. The report’s goal is to understand whether market-facing AI changes the risk profile of trading activities and, if so, whether the existing control environment remains fit for purpose.
AI adoption remains uneven across firms, and best practices are not yet settled. FMSB frames the paper as part of an ongoing conversation among market participants, risk teams, and policymakers.
Defining AI in markets
Technology evolves quickly, and any rigid definition risks becoming outdated almost immediately. At the same time, defining AI too broadly can create confusion by lumping together cutting-edge systems with long-established quantitative methods that firms already understand and govern well. FMSB therefore cautions against building too much around a fixed line between “AI” and “non-AI.” In practice, the report suggests that such distinctions may be artificial, unstable, and not particularly useful for risk management.
Instead, the paper introduces a spectrum of techniques, ranging from traditional methods like OLS regressions and simple decision trees, through simple AI and machine learning methods, into more advanced techniques such as deep learning, reinforcement learning, and transformers, and finally to generative AI. That framing is important because it moves the debate away from labels and toward gradations of complexity, autonomy, and application. The real question becomes: what is the model doing inside the trading system, how influential is it, and how risky is the function it supports?
This is one of the report’s most useful conceptual contributions. Rather than treating AI as a magical category requiring wholly separate thinking, FMSB positions it as part of a continuum of analytical and decision-support techniques. That makes the report more flexible and more realistic. It also sets up the later argument that existing model governance tools remain highly relevant, provided they are adapted to the scale, speed, and opacity of newer AI use cases.
Definition
Official definitions from bodies such as the OECD, NIST, and the EU AI Act exist, but when applied to market practice, those definitions can blur the distinction between genuinely novel AI systems and long-standing quantitative tools already embedded in trading infrastructure.
FMSB suggests that drawing a rigid boundary can be misleading because innovation moves too fast and because many tools sit in grey areas. For example, some machine learning techniques might be considered “AI” by some practitioners and not by others. What matters more is whether the model introduces meaningfully different risk characteristics in context.
This framing is important because it challenges a common habit in AI discourse: assuming that once something is called AI, it automatically requires a radically new governance approach. FMSB’s position is more measured. Definitions are useful, but practical risk oversight should focus on function, materiality, and deployment context rather than terminology alone.
Applying AI techniques to trading systems
In current market practice, AI models are typically embedded inside already automated electronic trading systems that are governed by policies, procedural logic, human supervision, and independent risk controls. That means the risk profile of an AI model depends not only on the model itself but on the broader system in which it sits. A model that generates a forecast as one input into a larger process is very different from one that directly determines execution actions.
FMSB is especially clear on autonomy. Despite the popular narrative around autonomous AI, the report states that current market-facing AI use cases do not operate on a fully autonomous basis. Human traders and supervisors retain oversight and intervention authority, and control frameworks are designed outside the AI layer. The paper links this to existing algorithmic trading requirements, including human supervision expectations under RTS 6.
Evaluating AI risk requires systems thinking. It is not enough to ask whether a model is advanced or opaque. You also need to ask where it sits in the workflow, what decisions it influences, what non-AI controls surround it, and how easily humans can monitor or override it. That systems-level view runs through the rest of the paper.
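To make this systems-level framing concrete, the following sketch shows an AI forecast embedded as one input inside a workflow whose limits, confidence thresholds, and kill switch all sit outside the AI layer. All names here (`ai_forecast`, `TradingWorkflow`, the numeric thresholds) are illustrative assumptions, not anything specified by the report.

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    price: float
    confidence: float

def ai_forecast(market_data: dict) -> Forecast:
    # Stand-in for an AI model output; the model is a black box here.
    return Forecast(price=market_data["mid"] * 1.001, confidence=0.6)

def within_limits(order_size: float, max_size: float) -> bool:
    # Independent, non-AI pre-trade risk check sitting outside the AI layer.
    return 0 < order_size <= max_size

class TradingWorkflow:
    def __init__(self, max_order_size: float):
        self.max_order_size = max_order_size
        self.kill_switch = False  # human intervention point

    def decide(self, market_data: dict, desired_size: float):
        if self.kill_switch:
            return None  # supervisors can halt the system entirely
        forecast = ai_forecast(market_data)   # AI is one input, not the decider
        if forecast.confidence < 0.5:
            return None                        # bounded use of the AI signal
        if not within_limits(desired_size, self.max_order_size):
            return None                        # control outside the model
        side = "buy" if forecast.price > market_data["mid"] else "sell"
        return {"side": side, "size": desired_size}
```

The point of the sketch is structural: the same model would carry very different risk if `decide` executed orders directly without the surrounding checks.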
Applications and risks
Advances in computing power and AI techniques have naturally extended into financial markets, especially because trading has always been a technologically intensive domain. Firms are exploring AI to improve efficiency, execution quality, forecasting, client servicing, and revenue generation. But FMSB resists the temptation to make this sound more radical than it is. In many current cases, AI is an incremental enhancement to already automated and analytics-heavy systems rather than a wholesale replacement of traditional trading logic.
The research divides AI use cases into categories and then examines the associated risks. This helps readers understand that not all AI in trading carries the same implications. A support tool that summarizes research or extracts data from unstructured client requests is quite different from an AI-enhanced smart order router or a directly adaptive market-making agent.
The paper keeps returning to a simple principle: risk rises with the complexity of the task, the scope of the AI inside the workflow, and the degree to which system outputs can directly affect the market. Many AI risks are not entirely new. Model risk, trading risk, conduct risk, and operational resilience issues already exist in electronic markets. AI can intensify them, make them harder to detect, or make explanations less intuitive, but it does not always create a separate category of risk from scratch.
Use cases
FMSB groups use cases into three broad buckets: support tools, input modules, and system logic:
Support tools include things like research report generation, trade documentation review, and natural language processing of client communications or trade requests. These are useful because they move work from manual review to machine-assisted processing, improving speed and scale. But in most cases, they are not directly market-facing.
Input modules are the most common real-world application of AI in trading today. These models may forecast prices or volumes, generate liquidity metrics, model hit rates, or produce signals shown inside execution management systems. Crucially, they do not directly execute trades; they feed into broader non-AI trading systems.
System logic is more consequential. Here AI influences the decision-making layer itself, for example through smart order routing, execution sequencing, or trading bots acting on behalf of people. These are the use cases where the market-facing stakes become much higher because the model is closer to actual execution and can shape behavior at speed and scale.
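The three buckets above can be expressed as a simple classification, with oversight intensity rising as the model moves closer to execution. The mapping below reflects the report's qualitative ordering only; the names and the low/medium/high scale are assumptions for illustration.

```python
from enum import Enum

class UseCase(Enum):
    SUPPORT_TOOL = 1   # e.g. research summarization, document review
    INPUT_MODULE = 2   # e.g. price/volume forecasts feeding a non-AI system
    SYSTEM_LOGIC = 3   # e.g. AI-influenced order routing or execution tactics

def oversight_intensity(category: UseCase) -> str:
    # Illustrative mapping from category to oversight intensity
    # (assumption: not a scale prescribed by the report).
    return {
        UseCase.SUPPORT_TOOL: "low",
        UseCase.INPUT_MODULE: "medium",
        UseCase.SYSTEM_LOGIC: "high",
    }[category]
```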
AI risks in trading systems
As trading has become more automated, the nature of risk has shifted. Manual errors may decline, but technology risk, model risk, infrastructure dependence, and governance complexity all increase. AI continues that pattern. Sometimes its impact is limited; sometimes it can materially heighten risk. The report emphasizes that the level of risk depends on the use case, not on AI in the abstract. A forecasting model used as one input in a VWAP algorithm is very different from an AI-driven system selecting venues or adjusting execution tactics in real time.
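The VWAP example illustrates why an input module is comparatively contained: the forecast only shapes the slicing of a parent order, while the scheduler itself stays deterministic. A minimal sketch, assuming a per-interval volume forecast that could come from a historical average or from an AI model:

```python
def vwap_schedule(total_qty: float, volume_forecast: list[float]) -> list[float]:
    """Slice a parent order across intervals in proportion to forecast volume.

    The forecast is one input to a deterministic scheduler; swapping a simple
    historical profile for an AI-generated one changes the input quality,
    not the execution logic around it.
    """
    total_volume = sum(volume_forecast)
    return [total_qty * v / total_volume for v in volume_forecast]
```

For example, a 1,000-share order against a forecast profile of [100, 300, 600] is sliced into child quantities of 100, 300, and 600.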
FMSB gives significant attention to model risk management. It notes that AI raises familiar issues around data quality, training data transparency, performance, governance, technical expertise, and operational resilience. Firms already have model risk frameworks, and these can apply to AI as well, but they may need to be strengthened for higher complexity, faster adaptation, and reduced interpretability.
As AI models grow more complex, FMSB warns that trying to fully interpret internal decision pathways may hit practical limits. Rather than demanding impossible transparency, firms should focus on whether outputs are reproducible, testable, monitorable, and bounded by independent controls. In other words, the report shifts the center of gravity from “Can we explain every internal step?” to “Can we validate and contain the outcomes?” That is a pragmatic stance likely to resonate with both risk teams and operators.
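That shift from internal explanation to outcome validation can be sketched as a black-box wrapper: check that the output is reproducible and sits within independently set bounds before anything downstream consumes it. The function name and bound parameters are hypothetical, chosen only to illustrate the idea.

```python
def validate_output(model, inputs, lower: float, upper: float,
                    n_repeats: int = 3) -> float:
    """Treat the model as a black box: verify the output is reproducible
    and bounded by independent controls before releasing it downstream."""
    outputs = [model(inputs) for _ in range(n_repeats)]
    if len(set(outputs)) != 1:
        raise ValueError("non-reproducible output")      # fails validation
    out = outputs[0]
    if not (lower <= out <= upper):
        raise ValueError("output outside control bounds")  # contained, not explained
    return out
```

Note that nothing in the wrapper needs to understand the model's internal decision pathways; it only tests and contains the outcomes, which is the report's suggested center of gravity.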
The report also identifies several systemic risks, such as synchronized AI behavior, coordinated adaptation across AI systems, shared infrastructure failures, and concentration in data or AI providers. But FMSB is careful not to overstate the present danger. It explicitly says these more systemic risks appear limited today because market-facing AI is not yet sufficiently autonomous and remains under human supervision.
Accountability
As AI takes over a larger share of trading-related tasks, responsibility cannot simply disappear into the machine. FMSB argues that accountability should remain with clearly identified humans, just as it does in manual or conventional electronic trading environments. AI may change how decisions are generated, but it should not weaken ownership over design, implementation, supervision, or response.
The report says AI models and systems should have clearly defined human owners. That means someone should own the code, someone should own the business use, and responsible roles should have the tools to perform genuine oversight. These tools include monitoring metrics, alerts, intervention mechanisms, and management information capable of supporting timely action. This is especially important where models are adaptive or display higher degrees of apparent autonomy. The more capable the system becomes, the more important it is that the accountability chain stays explicit and operational.
Assessing AI in trading systems
The paper asks firms to evaluate AI use cases by looking at purpose, complexity, autonomy, risk profile, and the adequacy of controls. This is useful because it translates the report’s conceptual arguments into something closer to a checklist for decision-makers, risk managers, and supervisors. Rather than asking “Is this AI?” the framework asks more relevant questions: Is it market-facing? How complex is the task? What could go wrong? How material would errors be? What controls sit around inputs, outputs, and outcomes?
This approach reflects the report’s central philosophy: context matters more than labels. Two AI systems may use similar techniques but create very different risks depending on whether they sit in a signal-generation layer, an execution layer, or a client-pricing process. By structuring the assessment around purpose, risks, and controls, the paper encourages firms to think holistically about deployment.
Contextual considerations
The paper proposes a structured three-step framework for evaluating AI use cases in trading systems.
The first step is to understand the purpose of the use case and how it works. Firms should ask whether the system is market-facing, how complex the task is, whether AI is used as an input module or as part of system logic, how much autonomy it exhibits, and whether human supervision and intervention remain in place. This is a powerful filter because it quickly separates low-impact experimental tools from genuinely consequential market-facing systems.
The second step is to identify the key risks. Here the report urges firms to think in outcome terms: what undesirable outcomes could occur, does AI materially alter the risk profile, how do AI risks compare with traditional model risks, and how significant would errors be for the firm or for external stakeholders. This is a reminder that risk analysis must be specific and proportional. Not every opaque model is a crisis, but not every useful AI application is harmless either. Materiality matters.
The third step focuses on controls. FMSB asks whether outputs can be reproduced for validation, whether the control framework matches the use case’s scale and complexity, whether existing model risk metrics are sufficient, whether algorithmic trading controls adequately limit risk, whether failures can be detected in time, and whether staff have the right skills. This is arguably the most practical part of the paper because it converts AI governance into operational questions firms can actually work through.
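The three steps above lend themselves to a structured checklist. The sketch below captures them as a data record with one illustrative escalation rule; the field names and the `needs_escalation` heuristic are assumptions for illustration, not criteria prescribed by the report.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseAssessment:
    # Step 1: purpose and mechanics
    market_facing: bool
    task_complexity: str                  # e.g. "low" / "medium" / "high"
    role: str                             # "support", "input_module", "system_logic"
    human_oversight: bool
    # Step 2: key risks, framed in outcome terms
    undesirable_outcomes: list[str] = field(default_factory=list)
    materially_alters_risk: bool = False
    # Step 3: controls
    outputs_reproducible: bool = False
    controls_match_scale: bool = False
    failures_detectable: bool = False

    def needs_escalation(self) -> bool:
        # Illustrative rule of thumb (assumption, not from the report):
        # market-facing system logic lacking any Step-3 control gets escalated.
        return (self.market_facing
                and self.role == "system_logic"
                and not (self.outputs_reproducible
                         and self.controls_match_scale
                         and self.failures_detectable))
```

A record like this makes the report's point operational: two use cases built on similar techniques can land in very different places once purpose, risks, and controls are filled in.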
Case studies
The case studies show how FMSB’s framework applies in different trading contexts:
The first case, an electronic trading system with multiple AI-powered components, is intentionally nuanced. AI is used in areas such as NLP-derived signals, alpha models, and reinforcement-learning-enhanced execution logic, but most of these remain components inside a larger controlled system. The report concludes that while AI can heighten some risks, especially nearer the execution layer, the incremental risk remains manageable when existing model risk management and gateway trading controls are robust. This is a good example of the paper’s balanced tone: not dismissive, not alarmist.
The second case study, on market-making agents, is more cautionary. Here reinforcement learning agents adjust quotes, sizes, and spreads with limited human input, directly adapting to live market conditions. This creates a higher-risk setting because behaviors can become correlated, procyclical, or even drift toward problematic conduct patterns such as unintended spoofing. Yet FMSB again resists sensationalism: it argues that these risks are not fundamentally different from those posed by highly automated non-AI strategies. What changes is the urgency of maintaining independent controls, behavior detection mechanisms, and clearly assigned human accountability.
The third case study shifts from pure trading mechanics to client pricing. A RAG-based CRM system surfaces relationship and servicing insights that influence quote calibration. This is especially interesting because the risk here is less about systemic market disruption and more about fairness, bias, explainability, and over-reliance by staff. FMSB notes that such systems may be hard to interpret and could create inconsistent outcomes across clients or regions. That makes this case a strong reminder that AI risk in trading is not only about speed and volatility; it is also about conduct, governance, and the treatment of clients. Taken together, the three cases show the breadth of AI-related questions trading firms now have to confront.
Conclusion
AI is becoming a more important feature of financial markets, but in current trading practice it is usually embedded within larger electronic systems, not deployed as a free-standing autonomous actor. In most present-day use cases, AI acts as a contained component that provides inputs, improves analytics, or supports adaptive decision-making under independent controls and human supervision.
In that sense, FMSB argues, AI is often better understood as an enhancement of existing statistical inference and automation rather than a complete rupture with past trading models.
AI is already being applied to more complex tasks and objective functions, and that complexity can strain control frameworks. Firms should evaluate risks in context, remain alert to scale, speed, novelty, and opacity, and continue updating safeguards accordingly.
Existing frameworks such as model risk management and algorithmic trading controls still provide a strong foundation, but they must evolve as AI use cases do.

