Today, we’re adding Qwen models from Alibaba to Amazon Bedrock. With this launch, Amazon Bedrock continues to expand model choice by adding access to Qwen3 open weight foundation models (FMs) in a fully managed, serverless way. This launch includes four models: Qwen3-Coder-480B-A35B-Instruct, Qwen3-Coder-30B-A3B-Instruct, Qwen3-235B-A22B-Instruct-2507, and Qwen3-32B (Dense). Together, these models feature both mixture-of-experts (MoE) and dense architectures, providing flexible options for different application requirements.
Amazon Bedrock provides access to industry-leading FMs through a unified API without requiring infrastructure management. You can access models from multiple model providers, integrate models into your applications, and scale usage based on workload requirements. With Amazon Bedrock, customer data isn’t used to train the underlying models. With the addition of Qwen3 models, Amazon Bedrock offers even more options for use cases such as:
- Code generation and repository analysis with extended context understanding
- Building agentic workflows that orchestrate multiple tools and APIs for business automation
- Balancing AI costs and performance using hybrid thinking modes for adaptive reasoning
Qwen3 models in Amazon Bedrock
These four Qwen3 models are now available in Amazon Bedrock, each optimized for different performance and cost requirements:
- Qwen3-Coder-480B-A35B-Instruct – This is a mixture-of-experts (MoE) model with 480B total parameters and 35B active parameters. It’s optimized for coding and agentic tasks and achieves strong results on benchmarks such as agentic coding, browser use, and tool use. These capabilities make it suitable for repository-scale code analysis and multistep workflow automation.
- Qwen3-Coder-30B-A3B-Instruct – This is a MoE model with 30B total parameters and 3B active parameters. Specifically optimized for coding tasks and instruction-following scenarios, this model demonstrates strong performance in code generation, analysis, and debugging across multiple programming languages.
- Qwen3-235B-A22B-Instruct-2507 – This is an instruction-tuned MoE model with 235B total parameters and 22B active parameters. It delivers competitive performance across coding, math, and general reasoning tasks, balancing capability with efficiency.
- Qwen3-32B (Dense) – This is a dense model with 32B parameters. It’s suitable for real-time or resource-constrained environments such as mobile devices and edge computing deployments where consistent performance is critical.
Architectural and functional features in Qwen3
The Qwen3 models introduce several architectural and functional features:
MoE compared with dense architectures – MoE models such as Qwen3-Coder-480B-A35B-Instruct, Qwen3-Coder-30B-A3B-Instruct, and Qwen3-235B-A22B-Instruct-2507 activate only a subset of their parameters for each request, providing high performance with efficient inference. The dense Qwen3-32B activates all parameters, offering more consistent and predictable performance.
Agentic capabilities – Qwen3 models can handle multi-step reasoning and structured planning in a single model invocation. They can generate outputs that call external tools or APIs when integrated into an agent framework. The models also maintain extended context across long sessions. In addition, they support tool calling to allow standardized communication with external environments.
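To make tool calling concrete, here’s a minimal sketch (not from the original post) using the Amazon Bedrock Converse API with boto3. The `get_weather` tool and its schema are hypothetical placeholders, and the Region is only an example:

```python
import boto3

# Bedrock Runtime client (example Region; pick one where the model is available)
client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Hypothetical tool definition the model can choose to call
tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "get_weather",  # placeholder tool name
                "description": "Get the current weather for a city.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "city": {"type": "string", "description": "City name"}
                        },
                        "required": ["city"],
                    }
                },
            }
        }
    ]
}

response = client.converse(
    modelId="qwen.qwen3-coder-480b-instruct-v1:0",  # ID from the Strands example below
    messages=[{"role": "user", "content": [{"text": "What's the weather in Seattle?"}]}],
    toolConfig=tool_config,
)

# If the model decides to call the tool, the reply contains a toolUse block
for block in response["output"]["message"]["content"]:
    if "toolUse" in block:
        print("Tool requested:", block["toolUse"]["name"], block["toolUse"]["input"])
```

In a full agent loop, you would run the requested tool yourself and return the result to the model as a `toolResult` content block in a follow-up `converse` call.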
Hybrid thinking modes – Qwen3 introduces a hybrid approach to problem-solving that supports two modes: thinking and non-thinking. The thinking mode applies step-by-step reasoning before delivering the final answer, which is ideal for complex problems that require deeper thought. The non-thinking mode provides fast, near-instant responses for less complex tasks where speed is more important than depth. This helps developers manage performance and cost trade-offs more effectively.
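Qwen3’s upstream documentation describes `/think` and `/no_think` soft switches that can be placed in the prompt for hybrid-capable models such as Qwen3-32B; whether the Bedrock-hosted deployment honors them is something to verify for your model. A minimal sketch under that assumption, with a placeholder model ID:

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Placeholder model ID for Qwen3-32B; look up the exact ID in the Bedrock console
MODEL_ID = "qwen.qwen3-32b-v1:0"

def ask(prompt: str) -> str:
    response = client.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

# Thinking mode (assumed /think soft switch): deeper step-by-step reasoning
print(ask("/think A train leaves at 9:14 and arrives at 11:02. How long is the trip?"))

# Non-thinking mode (assumed /no_think soft switch): fast, direct answer
print(ask("/no_think What is the capital of Japan?"))
```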
Long-context handling – The Qwen3-Coder models support extended context windows, with up to 256K tokens natively and up to 1 million tokens with extrapolation methods. This allows the model to process entire repositories, large technical documents, or long conversational histories within a single task.
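As one way to picture repository-scale processing, here’s a small illustrative sketch (not from the original post) that gathers a project’s Python files into a single long-context prompt; the file filtering and prompt wording are arbitrary choices:

```python
from pathlib import Path

def build_repo_prompt(repo_root: str, question: str, suffixes=(".py",)) -> str:
    """Concatenate source files into one long-context prompt."""
    parts = [question, ""]
    for path in sorted(Path(repo_root).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            parts.append(f"### File: {path.relative_to(repo_root)}")
            parts.append(path.read_text(encoding="utf-8", errors="ignore"))
    return "\n".join(parts)

prompt = build_repo_prompt(".", "Summarize this codebase and flag any obvious bugs.")
# Send `prompt` in a single Converse call, as in the earlier tool-calling sketch.
```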
When to use each model
The four Qwen3 models serve distinct use cases. Qwen3-Coder-480B-A35B-Instruct is designed for complex software engineering scenarios. It’s suited for advanced code generation, long-context processing such as repository-level analysis, and integration with external tools. Qwen3-Coder-30B-A3B-Instruct is particularly effective for tasks such as code completion, refactoring, and answering programming-related queries. If you need versatile performance across multiple domains, Qwen3-235B-A22B-Instruct-2507 offers a balance, delivering strong general-purpose reasoning and instruction-following capabilities while leveraging the efficiency advantages of its MoE architecture. Qwen3-32B (Dense) is appropriate for scenarios where consistent performance, low latency, and cost optimization are critical.
Getting started with Qwen models in Amazon Bedrock
To start using Qwen models, in the Amazon Bedrock console, I choose Model access from the Configure and learn section of the navigation pane. I then navigate to the Qwen models to request access. In the Chat/Text playground section of the navigation pane, I can quickly test the new Qwen models with my prompts.
To integrate Qwen3 models into my applications, I can use any of the AWS SDKs. The AWS SDKs include access to the Amazon Bedrock InvokeModel and Converse APIs. I can also use these models with any agentic framework that supports Amazon Bedrock and deploy the agents using Amazon Bedrock AgentCore. For example, here’s the Python code of a simple agent with tool access built using Strands Agents:
```python
from strands import Agent
from strands_tools import calculator

agent = Agent(
    model="qwen.qwen3-coder-480b-instruct-v1:0",
    tools=[calculator]
)

agent("Tell me the square root of 42 ^ 9")

with open("function.py", 'r') as f:
    my_function_code = f.read()

agent(f"Help me optimize this Python function for better performance:\n\n{my_function_code}")
```
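Beyond agent frameworks, the same models can be called directly through the SDK. As one illustration (not from the original post), here’s a minimal streaming sketch with boto3 and the ConverseStream API; the Region and prompt are arbitrary:

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Stream tokens as they are generated instead of waiting for the full reply
stream = client.converse_stream(
    modelId="qwen.qwen3-coder-480b-instruct-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Write a Python function that checks if a string is a palindrome."}],
    }],
)

for event in stream["stream"]:
    if "contentBlockDelta" in event:
        print(event["contentBlockDelta"]["delta"].get("text", ""), end="", flush=True)
print()
```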
Now available
Qwen models are available today in the following AWS Regions:
- Qwen3-Coder-480B-A35B-Instruct is available in the US West (Oregon), Asia Pacific (Mumbai, Tokyo), and Europe (London, Stockholm) Regions.
- Qwen3-Coder-30B-A3B-Instruct, Qwen3-235B-A22B-Instruct-2507, and Qwen3-32B are available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai, Tokyo), Europe (Ireland, London, Milan, Stockholm), and South America (São Paulo) Regions.
Check the full Region list for future updates. You can start testing and building immediately without infrastructure setup or capacity planning. To learn more, visit the Qwen in Amazon Bedrock product page and the Amazon Bedrock pricing page.
Try Qwen models in the Amazon Bedrock console today, and provide feedback through AWS re:Post for Amazon Bedrock or your usual AWS Support channels.
— Danilo

