Introduction
Products and their context change constantly.
Architectures must evolve with them.
Yet ensuring systems follow best practices remains largely manual.
The barriers to creating our own standards are too high, so we adopt generic industry ones.
Best practices live in long PDFs, written without your context.
Compliance becomes a checkbox exercise, replacing informed decisions with overly broad rules.
Reviews happen late, risking expensive mistakes and incidents.
All systems are now distributed, but best practice knowledge is not.
Teams run hybrid, multi-vendor systems - but our tools reason about them one property at a time.
Well-Architected 2 (WA2) is an architecture reasoning system, not a compliance scanner.
WA2 builds a graph of your system and evaluates it against your intent.
As you build or evolve architectures, WA2 guides you, explaining best practices, what they imply, and how their consequences ripple through your architecture.
Instead of asking:
- Have you backed up this S3 bucket?
WA2 determines:
- Are your critical stores protected from data loss?
What WA2 is
WA2 consists of:
- Book: this guide, explaining both the thinking and the tool.
- Intents language: a small language for expressing architectural policies.
- Framework: vendor-independent best practices built on architectural concepts.
- Tooling:
- CLI: enforcement in CI/CD.
- Extension: editor integration that guides you around problems as you build.
The Big Idea
WA2 separates:
- How a system is implemented
from
- What it must guarantee
Rules add evidence to a shared graph.
Policies evaluate that evidence.
Vendor-specific logic produces facts.
Architectural intent consumes them.
This keeps governance clean and portable.
And allows us to establish an evidence chain - back to source code.
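The split above can be sketched in ordinary code. The following Python is our own illustration (names and structures invented, not WA2 internals): vendor-specific logic produces facts tagged with their source, and a vendor-neutral policy consumes only the facts.

```python
facts = []

def derive_from_cloudformation(template, filename):
    """Vendor-specific logic: produce facts, each tagged with its source."""
    for name, resource in template["Resources"].items():
        if resource["Type"] == "AWS::S3::Bucket":
            facts.append({"kind": "Store", "id": name,
                          "source": f"{filename}#{name}"})

def policy_list_stores():
    """Architectural intent: consumes facts, never reads the template."""
    return [f for f in facts if f["kind"] == "Store"]

template = {"Resources": {"DataBucket": {"Type": "AWS::S3::Bucket"}}}
derive_from_cloudformation(template, "naive.yaml")

stores = policy_list_stores()
print(stores[0]["source"])  # evidence chain back to the source
```

The policy never mentions CloudFormation: swapping in a different vendor means adding a new derive, not rewriting governance.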
Why This Matters
Architectures have grown far more complex.
Our tooling has not kept up.
WA2 changes how we think about architecture:
- Architecture becomes queryable.
- Best practices become executable.
- Governance becomes scalable.
- Vendor specifics become interchangeable.
- Developers get guidance in context.
Current Scope
Warning
WA2 is a work in progress and should not be used for production workloads.
As WA2 evolves we continue to discover things that lead to breaking changes; it is very volatile.
It is absolutely missing features, lacks consistency, and has bugs - beware!
This book is updated in parallel with tooling and framework, so check back for progress.
- Today WA2 supports AWS CloudFormation (JSON & YAML).
- It is designed to support additional systems over time.
Getting Started
WA2 is designed to be small, so getting started reflects that. What you will learn in this chapter:
- Installing WA2 on Linux, macOS, and Windows
- Writing a policy that checks a simple target system
- Adopting an Intent-native approach (todo)
- Installing and using the WA2 extension in VSCode (todo)
Installation
The first step is to install the WA2 cli1, intent. You’ll need an internet connection for the download. Since intent is a Rust binary, it’s a single file to install - or delete.
Tip
WA2 must be trustworthy to be useful, so it is open source. You should follow your security policies, and trust but verify before installing. You can view the source code for the book2, installation script3, and tooling4.
The following steps install the latest release5 of WA2 intent, and this
book assumes that version.
Linux or macOS
Note
We are assuming you place developer binaries in
~/.local/bin. You can place the intent binary wherever you want, but ensure it’s on your PATH.
If you’re using Linux or macOS, open a terminal and run:
curl -fsSL https://well.architected.to/install-intent.sh | sh
This script will:
- detect your platform and CPU architecture
- download the correct release from https://github.com/unremarkable-technology/tooling/releases
- install the intent binary to ~/.local/bin
After installation, verify:
intent --version
intent 0.2.0
If the command is not found, ensure ~/.local/bin is on your PATH.
Windows
WSL 2
Install using the same command as Linux:
curl -fsSL https://well.architected.to/install-intent.sh | sh
Verify:
intent --version
intent 0.2.0
If the command is not found, ensure ~/.local/bin is on your PATH.
PowerShell
Note
We are assuming you place developer executables in
%USERPROFILE%\bin. You can place the intent.exe wherever you want, but ensure it’s on your PATH.
Download the latest release. The following command assumes you have PowerShell available:
iwr https://github.com/unremarkable-technology/tooling/releases/latest/download/intent-win32-x64.zip -OutFile intent.zip
Expand-Archive intent.zip -DestinationPath $env:USERPROFILE\bin -Force
Then verify:
intent --version
intent 0.2.0
After installation you should be able to run intent from any terminal.
1. Command Line Interface ↩
2. https://github.com/unremarkable-technology/book ↩
3. https://well.architected.to/install-intent.sh ↩
4. https://github.com/unremarkable-technology/tooling ↩
5. https://github.com/unremarkable-technology/tooling/releases/latest/ ↩
Hello, World!
With the WA2 intent binary installed, we can check our first system!
As is traditional when learning a new language, we open with the Hello, world! example,
though what we actually want is something closer to: Success: target satisfies intent -
but “Hello, Success: target satisfies intent” is not as catchy as “Hello, World!”
A target
We need a target system to check.
Our target will be this simple AWS CloudFormation template.
It creates a single S3 bucket.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
DataBucket:
Type: AWS::S3::Bucket
We might assume that this bucket stores data because it is named DataBucket,
but that is just a guess at this point.
Write a test
We want to ask a question of the target: did you classify your data?
Let’s write a test to do that:
use aws:cfn
use data
// select which policies are active in this profile
profile example {
policy require_classification
}
// we require everything is given a classification
policy require_classification {
must all_cfn_rx_must_be_classified
}
// we need to know which cfn rx are critical
rule all_cfn_rx_must_be_classified {
// everything that is a AWS CloudFormation Resource
for resource in query(aws:cfn:Resource) {
// resource must have a data:Criticality fact attached
must query(resource/data:Criticality) {
subject: resource,
message: "Resource must be classified"
}
}
}
The code above is written in the intent language:
- we use some supporting namespaces for AWS CloudFormation and data classification
- we create a profile to group policies
- we define a policy describing what must be satisfied
- finally, the rule that is evaluated
When WA2 is asked to look at the target,
it automatically converts the CloudFormation into the WA2 graph.
So our rule can query(aws:cfn:Resource) to find all
CloudFormation resources in the graph.
Our rule also uses query(resource/data:Criticality) to
check if the resource has data Criticality evidence.
The must keyword is a modal verb (RFC 21191) that
tells WA2 how this rule is satisfied. In this case
we used must so what follows must be truthy (not empty, false, or 0).
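As a rough illustration of that truthiness rule (our reading of the semantics described above, not WA2 source code), a checker might classify values like this:

```python
def truthy(value):
    """Hypothetical sketch: a value satisfies `must` unless it is
    empty, false, or 0."""
    if value is False or value == 0:
        return False
    if hasattr(value, "__len__") and len(value) == 0:
        return False  # an empty query result fails the modal
    return value is not None

assert truthy(["DataBucket"])   # non-empty query result: satisfied
assert not truthy([])           # empty query result: a finding is raised
assert not truthy(False) and not truthy(0)
```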
Run the test
We can now use the CLI to check whether our target satisfies our intent:
intent check --profile example --target naive.yaml --entry naive.wa2
PREPARE
-------
✓ Read target naive.yaml
• Schedule CloudFormation validation
Validation will run concurrently and report after results.
✓ Initialise kernel
✓ Parse intent entry naive.wa2
✓ Select profile example
✓ Run analysis
RESULTS
-------
✗ Profile: example [0/1]
└─ ✗ Policy: naive:require_classification [0/1]
└─ ✗ must naive:all_cfn_rx_must_be_classified (1 finding)
└─ ✗ DataBucket
Location: naive.yaml: line 3
Message: Resource must be classified
VALIDATION
----------
✓ Validate CloudFormation against specification
We were looking for evidence of classification.
Right now, no such evidence exists.
We have not yet told WA2 how that evidence should be produced.
So the policy fails, correctly.
Note
There are three sections in the output
- PREPARE: load and parse files ready for analysis
- RESULTS: show success or issues
- VALIDATION: validate the target against the CloudFormation Specification from AWS. Since validation against the specification takes time, we optimise the developer experience by running it in the background, which is why it appears last.
Add the --novalidation parameter to disable validation for even faster execution.
You can view this as a Test-Driven Development (TDD2) approach:
- write a test
- see the test fail
- write the simplest code that helps it pass
- refactor as needed
We’ve done the first two steps already, so let’s write that simple code to help it pass.
Help it pass
Our rule looks for evidence of data classification.
We need to say how data classification is expressed in our CloudFormation implementation.
In CloudFormation, we normally do this with “AWS Tags”.
In our CloudFormation we plan to use a DataCriticality tag, so let’s query for that.
We want this code to run every time WA2 tries to satisfy our intent.
We use the derive keyword to say it is going to add to the graph:
// a derive creates derived information
derive evidence_of_criticality_from_cfn_rx_tagging {
for resource in query(aws:cfn:Resource) {
let tag = query(resource/aws:Tags/*[aws:Key = "DataCriticality"])
// tagged? create a data:Criticality fact, and attach it to the resource
if tag {
let fact = add(_, wa2:type, data:Criticality)
add(resource, wa2:contains, fact)
}
}
}
Our intent code queries for all CloudFormation resources.
If a resource has an AWS Tag, and it has a DataCriticality key -
then we add data:Criticality evidence to that resource in the graph.
Fix the target
Update the target CloudFormation to include the classification tag:
AWSTemplateFormatVersion: "2010-09-09"
Resources:
DataBucket:
Type: AWS::S3::Bucket
Properties:
Tags:
- Key: DataCriticality
Value: Important
Run the test (again)
Let’s check the target again:
intent check --profile example --target tagged.yaml --entry tagged.wa2
PREPARE
-------
✓ Read target tagged.yaml
• Schedule CloudFormation validation
Validation will run concurrently and report after results.
✓ Initialise kernel
✓ Parse intent entry tagged.wa2
✓ Select profile example
✓ Run analysis
RESULTS
-------
✓ Profile: example [1/1]
VALIDATION
----------
✓ Validate CloudFormation against specification
The policy is satisfied because the required architectural fact now exists.
What just happened?
When WA2 evaluates a system, it builds a graph representation of the architecture and reasons about it.
Your CloudFormation becomes nodes and relationships in the WA2 graph.
Rules and derives operate on that graph.
CloudFormation
↓
WA2 Graph
↓ ↑
derive → evidence
↓
rule
↓
policy
↓
profile
↓
evaluation result
- derive statements add evidence to the graph
- rules evaluate that evidence
- policies group rules into architectural requirements
Vendor-specific logic derives facts about the system.
Architectural policies evaluate those facts without depending on implementation details.
Peering at the graph
Sometimes it’s useful to look at the graph, which is displayed
as a containment tree, with → indicating non-containing edges:
intent check --profile example --target tagged.yaml --entry tagged.wa2 --graph
PREPARE
-------
✓ Read target tagged.yaml
• Schedule CloudFormation validation
Validation will run concurrently and report after results.
✓ Initialise kernel
✓ Parse intent entry tagged.wa2
✓ Select profile example
✓ Run analysis
RESULTS
-------
✓ Profile: example [1/1]
GRAPH
-----
core:workload : core:Workload
├─ -core:source-
│ └─ _:59 : aws:cfn:Template
│ ├─ -aws:cfn:pseudoParameters-
│ │ └─ _:60
│ │ ├─ AWS::AccountId : aws:cfn:PseudoParameter
│ │ ├─ AWS::NotificationARNs : aws:cfn:PseudoParameter
│ │ ├─ AWS::NoValue : aws:cfn:PseudoParameter
│ │ ├─ AWS::Partition : aws:cfn:PseudoParameter
│ │ ├─ AWS::Region : aws:cfn:PseudoParameter
│ │ ├─ AWS::StackId : aws:cfn:PseudoParameter
│ │ ├─ AWS::StackName : aws:cfn:PseudoParameter
│ │ └─ AWS::URLSuffix : aws:cfn:PseudoParameter
│ └─ -aws:cfn:resources-
│ └─ _:70
│ └─ DataBucket : aws:cfn:Resource
│ aws:type="AWS::S3::Bucket"
│ aws:logicalId="DataBucket"
│ ├─ -aws:Tags-
│ │ └─ _:72
│ │ └─ _:74
│ │ aws:Key="DataCriticality"
│ │ aws:Value="Important"
│ └─ _:78 : data:Criticality
└─ _:77 : core:Store
└─ -core:source- DataBucket : aws:cfn:Resource (→)
VALIDATION
----------
✓ Validate CloudFormation against specification
Refactor as needed
Our tests are green, but they carry technical debt:
- Our policy is tightly coupled to CloudFormation
- The evidence is weak; we are not validating the tag value
- It asks a compliance question: “did you do it?”, not “did you need to?”
Let’s address that in the next chapter.
1. RFC 2119 https://www.ietf.org/rfc/rfc2119.txt ↩
2. https://en.wikipedia.org/wiki/Test-driven_development ↩
Intent driven
In this chapter we show how to refactor away our test’s technical debt:
- Our policy is tightly coupled to CloudFormation
- The evidence is not robust; we are not checking the contents of the tag
- It is a compliance checkbox: “did you do it?”, not “did you need to?”
Our first task is to remove the direct linkage to CloudFormation in our policy.
A higher vista
Let’s take a higher vista and look at architecture at a higher level of abstraction.
The WA2 Framework provides the core namespace, which defines these key elements:
// Architectural node types
enum Node { Store, Run, Move }
struct Workload {
nodes: Node[]
}
struct Evidence {
value: String
}
The foundations of the core namespace (and indeed WA2) are these:
- We reason about Nodes, which come in three variations: Store (data), Run (code), and Move (information)
- We arrange a set of Nodes in our graph into a Workload
- We use Evidence to enrich the graph
Projecting into our vista
As we saw in the previous chapter, the intent language allows us to write
queries at an AWS CloudFormation level: query(aws:cfn:Resource)
This is critical for creating evidence at a vendor level, but we want to reason about architecture, not implementation.
The WA2 Framework provides the aws:cfn namespace which projects from
CloudFormation into the core:Node type. So for example in this snippet
we can see how it maps aws:type into Node:Store
derive stores {
let cfn_stores = query(aws:cfn:Resource[aws:type in (
"AWS::S3::Bucket",
"AWS::EC2::Volume",
"AWS::EFS::FileSystem"
⋮
)])
for s in cfn_stores {
let node = add(_, wa2:type, core:Store)
add(node, core:source, s)
add(core:workload, wa2:contains, node)
}
}
This means that if you add
use core
use aws:cfn
to your wa2 intent file, you automatically get these projections. This allows us to rewrite our policy rule without reference to AWS.
Policy independent of vendor
Now that we can work at a higher level, we can write policy that is vendor neutral. In the last chapter we were checking all CloudFormation Resources for data classification, which makes no sense for an AWS IP address (for example). Now we can start by querying only stores:
// we require everything is given a classification
policy require_classification {
must all_stores_must_be_classified
}
// we need to know which cfn rx are critical
rule all_stores_must_be_classified {
let scope = query(core:Store)
for store in scope {
// reference the source of this store (will be a cfn resource)
let source = query(store/core:source)
must query(store/core:Evidence/data:Criticality) {
subject: source,
area: data:Criticality,
message: "Stores need to have criticality classification"
}
}
}
We use core:source to refer back to the
source of the Store - in a CloudFormation based workload, that will be
the Resource. Also note how we are now using core:Evidence to standardize where we
keep evidence facts.
So we derive the evidence from the CloudFormation level, and can build a rule
on top of the evidence, not the CloudFormation implementation detail.
// a derive creates derived information
derive evidence_of_criticality_from_cfn_rx_tagging {
let stores = query(core:Store[core:source/aws:cfn:Resource])
for store in stores {
let source = query(store/core:source)
let dc_tag = query(source/aws:Tags/*[aws:Key = "DataCriticality"])
should dc_tag {
subject: source,
area: data:Criticality,
message: "Add a DataCriticality tag to this Resource"
}
let evidence = add(_, wa2:type, core:Evidence)
add(store, wa2:contains, evidence)
let fact = add(_, wa2:type, data:Criticality)
add(evidence, wa2:contains, fact)
}
}
Note again that we place facts under core:Evidence to meet our rule expectations.
Instead of using an if statement to check for the existence of the tag, we now use a should modal.
The should (like the must) will stop the derive execution, preventing evidence from being added,
but instead of a fatal error, it will raise a warning.
Tip
Using a
shouldin aderiveprovides guidance to an engineer that is relevant at the implementation level. Therulewill signal a fatal architectural error about the lack of classification, but thederivecan tell the engineer what needs to be fixed at the CloudFormation level.
Ensure all tests continue to pass
So now we can run again to ensure our refactoring has not broken anything:
Let’s check the target again:
intent check --profile example --target tagged.yaml --entry unvendor.wa2
PREPARE
-------
✓ Read target tagged.yaml
• Schedule CloudFormation validation
Validation will run concurrently and report after results.
✓ Initialise kernel
✓ Parse intent entry unvendor.wa2
✓ Select profile example
✓ Run analysis
RESULTS
-------
✓ Profile: example [1/1]
VALIDATION
----------
✓ Validate CloudFormation against specification
So we have fixed our first piece of debt: policy too tightly tied to implementation detail. Now as WA2 adds new ways to ingest targets (API etc.), and new vendors (Azure, GCP), we won’t have to change our policy; we will just add new derives to gather the evidence we need.
Enforcing a taxonomy
Currently the tags on a Resource could contain any value, so we want to make sure they follow our Data Classification Taxonomy. Everyone has their own, so let’s define ours and then make sure it’s being used.
We can add an enum that lists all possible values, just like core did for Node.
enum DataCriticality {
Disposable,
NonCritical,
Important,
BusinessCritical,
MissionCritical
}
Then we can write a should query, using the as() function to convert the
Value of the AWS Tag into our enum DataCriticality:
// a derive creates derived information
derive evidence_of_criticality_from_cfn_rx_tagging {
let stores = query(core:Store[core:source/aws:cfn:Resource])
for store in stores {
let source = query(store/core:source)
let dc_tag = query(source/aws:Tags/*[aws:Key = "DataCriticality"])
// is there a dc tag?
should dc_tag {
subject: source,
area: DataCriticality,
message: "Add a DataCriticality tag (aws:Tags/aws:Key = 'DataCriticality') to this Resource"
}
// is the dc tag value valid in taxonomy?
should query(dc_tag/aws:Value) as(DataCriticality) {
subject: source,
area: DataCriticality,
message: "DataCriticality tag must be a value from DataCriticality taxonomy"
}
let evidence = add(_, wa2:type, core:Evidence)
add(store, wa2:contains, evidence)
let fact = add(_, wa2:type, data:Criticality)
add(evidence, wa2:contains, fact)
}
}
Now we only derive evidence of Criticality if the tagging follows our taxonomy. In theory this also allows different projects to use different taxonomies, and our policy would still work.
Note
the [modal] [value] as([name]) syntax is truthy.
In our example, if the value is not in the list of valid values in name, it evaluates to false. Since we used should, a non-valid value stops us adding evidence.
Ensure all tests continue to pass
Let’s check the target again:
intent check --profile example --target tagged.yaml --entry taxonomy.wa2
PREPARE
-------
✓ Read target tagged.yaml
• Schedule CloudFormation validation
Validation will run concurrently and report after results.
✓ Initialise kernel
✓ Parse intent entry taxonomy.wa2
✓ Select profile example
✓ Run analysis
RESULTS
-------
✓ Profile: example [1/1]
VALIDATION
----------
✓ Validate CloudFormation against specification
Acting on Intent
So now we can step away from broad compliance tickboxes, and instead use our intent to decide what must be done. First we extend our policy to require a new rule:
// protect critical data, which we know through classification
policy protect_stores_based_on_classification {
must all_stores_must_be_classified
must ensure_critical_stores_are_protected
}
Critical stores should be resilient
We write the new rule that says that all critical stores must be resilient:
rule ensure_critical_stores_are_protected {
let scope = query(core:Store[core:Evidence/data:isCritical])
for store in scope {
let source = query(store/core:source)
must query(store/core:Evidence/data:isResilient) {
subject: source,
area: data:isResilient,
message: "Critical stores need to be protected from loss"
}
}
}
Identify which stores are Critical
We are going to add to our tagging logic to identify if a store is critical or not based on our taxonomy:
// a derive creates derived information
derive evidence_of_criticality_from_cfn_rx_tagging {
let stores = query(core:Store[core:source/aws:cfn:Resource])
for store in stores {
let source = query(store/core:source)
let dc_tag = query(source/aws:Tags/*[aws:Key = "DataCriticality"])
// is there a dc tag?
should dc_tag {
subject: source,
area: DataCriticality,
message: "Add a DataCriticality tag (aws:Tags/aws:Key = 'DataCriticality') to this Resource"
}
// is the dc tag value valid in taxonomy?
let criticality = query(dc_tag/aws:Value) as(DataCriticality)
should criticality {
subject: source,
area: DataCriticality,
message: "DataCriticality tag must be a value from DataCriticality taxonomy"
}
let evidence = add(_, wa2:type, core:Evidence)
add(store, wa2:contains, evidence)
let fact = add(_, wa2:type, data:Criticality)
add(evidence, wa2:contains, fact)
// do we consider it critical? non-named are assumed critical
let is_critical = match criticality {
Disposable, NonCritical, Important => false,
else => true
}
// mark it critical
if is_critical {
let crit_fact = add(_, wa2:type, data:isCritical)
add(evidence, wa2:contains, crit_fact)
}
}
}
Tip
we use the match keyword to return different values based on the enum.
Note how we flipped the logic, so that when we add a new value to the enum in the future, the rule will defensively protect us by assuming it is critical.
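The same defensive pattern can be sketched in ordinary code. This Python mirror of the intent match above is purely illustrative (names invented): we enumerate the values we are sure are not critical, and treat everything else as critical.

```python
# Values we have explicitly decided are NOT critical.
NOT_CRITICAL = {"Disposable", "NonCritical", "Important"}

def is_critical(criticality: str) -> bool:
    # Unknown or future taxonomy values fall through to the
    # safe default: assume critical.
    return criticality not in NOT_CRITICAL

assert is_critical("MissionCritical")
assert is_critical("BrandNewTier")    # future enum value: safe default
assert not is_critical("Disposable")
```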
Gather evidence from implementation
Finally, we gather evidence of resilience. In this example we just look for S3 buckets with replication set up:
derive store_resilience_from_s3_replication {
// https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-requirements.html
let replicated_stores = query(aws:cfn:Resource[aws:type = "AWS::S3::Bucket"][
aws:VersioningConfiguration/aws:Status = "Enabled"
][
aws:ReplicationConfiguration/aws:Role
][
aws:ReplicationConfiguration/aws:Rules/*/aws:Status = "Enabled"
]/core:Store)
for store in replicated_stores {
let evidence = add(_, wa2:type, core:Evidence)
add(store, wa2:contains, evidence)
let fact = add(_, wa2:type, data:isResilient)
add(evidence, wa2:contains, fact)
}
}
We need to update our target to make this critical for our example:
AWSTemplateFormatVersion: "2010-09-09"
Resources:
DataBucket:
Type: AWS::S3::Bucket
Properties:
Tags:
- Key: DataCriticality
Value: MissionCritical
Let’s check the target again:
intent check --profile example --target protect.yaml --entry protect.wa2
PREPARE
-------
✓ Read target protect.yaml
• Schedule CloudFormation validation
Validation will run concurrently and report after results.
✓ Initialise kernel
✓ Parse intent entry protect.wa2
✓ Select profile example
✓ Run analysis
RESULTS
-------
✗ Profile: example [0/1]
└─ ✗ Policy: protect:protect_stores_based_on_classification [1/2]
└─ ✗ must protect:ensure_critical_stores_are_protected (1 finding)
└─ ✗ DataBucket
Location: protect.yaml: line 4
Area: data:isResilient
Message: Critical stores need to be protected from loss
VALIDATION
----------
✓ Validate CloudFormation against specification
So the result is telling us that there are critical stores that should be resilient but are not. That would be a very expensive mistake to make in production.
We need to update our target to make this store resilient. Getting
this right is not simple (and in this example is not complete!).
So this would be ideal to put in your standard
governance (more on this later) set of derives:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
DataBucketName:
Type: String
DestinationBucketArn:
Type: String
DestinationAccountId:
Type: String
ReplicationRoleName:
Type: String
Default: s3-replication-role
Resources:
# IAM role assumed by S3 to perform cross-account replication
# kept minimal and service-scoped to avoid broader IAM surface
ReplicationRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Ref ReplicationRoleName
# allow the S3 service to assume this role
# no human or workload access
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service: s3.amazonaws.com
Action: sts:AssumeRole
# Managed policy attached to the replication role
# split out to avoid inline IAM policies (guard requirement)
# permissions are tightly scoped to:
# - read versioned data from the source bucket
# - write replicated objects + deletes to the destination bucket
ReplicationPolicy:
Type: AWS::IAM::ManagedPolicy
Properties:
ManagedPolicyName: !Sub "${AWS::StackName}-s3-replication"
# attached only to the replication role
Roles:
- !Ref ReplicationRole
PolicyDocument:
Version: "2012-10-17"
Statement:
# allow S3 to read replication config and list source bucket
- Effect: Allow
Action:
- s3:GetReplicationConfiguration
- s3:ListBucket
Resource: !Sub "arn:${AWS::Partition}:s3:::${DataBucketName}"
# allow S3 to read all required object metadata + versions
# needed for correct replication of versioned + protected objects
- Effect: Allow
Action:
- s3:GetObjectVersionForReplication
- s3:GetObjectVersionAcl
- s3:GetObjectVersionTagging
- s3:GetObjectRetention
- s3:GetObjectLegalHold
Resource: !Sub "arn:${AWS::Partition}:s3:::${DataBucketName}/*"
# allow S3 to write replicated objects, deletes, and tags
# into the destination account bucket
- Effect: Allow
Action:
- s3:ReplicateObject
- s3:ReplicateDelete
- s3:ReplicateTags
- s3:ObjectOwnerOverrideToBucketOwner
Resource: !Sub "${DestinationBucketArn}/*"
DataBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: !Ref DataBucketName
# provide native undo for delete/overwrites, but ^cost
VersioningConfiguration:
Status: Enabled
# replicate the data bucket to another account
ReplicationConfiguration:
Role: !GetAtt ReplicationRole.Arn
Rules:
- Id: ReplicateAllToBackupAccount
Status: Enabled
DeleteMarkerReplication:
Status: Enabled
Destination:
Bucket: !Ref DestinationBucketArn
Account: !Ref DestinationAccountId
AccessControlTranslation:
Owner: Destination
Tags:
- Key: DataCriticality
Value: MissionCritical # major impact if we lose
Results
Now when we check the target, we see our intent is satisfied:
intent check --profile example --target resilient.yaml --entry protect.wa2 --verbose
PREPARE
-------
✓ Read target resilient.yaml
• Schedule CloudFormation validation
Validation will run concurrently and report after results.
✓ Initialise kernel
✓ Parse intent entry protect.wa2
✓ Select profile example
✓ Run analysis
RESULTS
-------
✓ Profile: example [1/1]
└─ ✓ Policy: protect:protect_stores_based_on_classification [2/2]
├─ ✓ must protect:all_stores_must_be_classified
└─ ✓ must protect:ensure_critical_stores_are_protected
VALIDATION
----------
✓ Validate CloudFormation against specification
Tip
We used the --verbose flag to show what’s been evaluated in this check.
We now have intent code:
// protect critical data, which we know through classification
policy protect_stores_based_on_classification {
must all_stores_must_be_classified
must ensure_critical_stores_are_protected
}
creating a policy that checks:
- are data stores classified according to our criticality taxonomy?
- are your critical stores protected from data loss?
with the benefits of:
- no policy written against a vendor-specific implementation
- no overly broad, sweeping compliance requirements that are overkill
- no noisy false alarms for resources that don’t need that level of protection
- no losing sight of the architectural policy we are trying to encourage
- one small language, not a polyglot of JSON, YAML, Python, etc.
We have ~115 lines of intent code, but most of this would be standard across any target system you built, and later we will show how you can package up common elements into your own namespace.
But first, let’s bring this capability into the home of engineers: our IDE.
Hello, World in IDE!
We can also use WA2 in VSCode.
Installation
Launch VS Code Quick Open (Ctrl+P), paste the following command, and press enter:
ext install FigmentEngineLtd.wa2
Now when you open a CloudFormation YAML or JSON file it will be checked against intent.
(more instructions to follow on how to use)
Appendices
WA2 Intent Language Specification
Version: 0.1.18
Status: Draft
1. Overview
The WA2 Intent Language is a declarative domain-specific language for expressing architectural policies, validation rules, and derived knowledge over a graph-based model of infrastructure.
1.1 Purpose
WA2 enables:
- Classification: Deriving semantic meaning from infrastructure configuration
- Validation: Asserting architectural requirements
- Guidance: Providing actionable feedback with appropriate severity
1.2 Execution Phases
The language operates in two ordered phases:
- Derive Phase — Enrich the graph with computed facts (model building)
- Rule Phase — Evaluate conditions and emit findings (validation only)
Policies and profiles control which rules are active and how their outcomes affect overall policy success.
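The two phases above can be sketched in a few lines of Python (all names invented for illustration; not the WA2 implementation): derives enrich a shared graph first, then rules evaluate it and emit findings.

```python
graph = []     # shared fact store (triples)
findings = []

def run(derives, rules):
    for derive in derives:   # Phase 1: model building, cannot fail
        derive(graph)
    for rule in rules:       # Phase 2: validation only
        findings.extend(rule(graph))
    return findings

def derive_store(graph):
    graph.append(("DataBucket", "wa2:type", "core:Store"))

def rule_stores_classified(graph):
    stores = [s for (s, p, o) in graph if o == "core:Store"]
    classified = {s for (s, p, o) in graph if o == "data:Criticality"}
    return [f"{s}: unclassified" for s in stores if s not in classified]

print(run([derive_store], [rule_stores_classified]))
```

Because every derive completes before any rule runs, rules always see the fully enriched graph.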
1.3 Design Principles
- Queries are the primary way to inspect the model
- Modal operators (
must/should/may) control both severity and control flow - Derives cannot fail; rules can
- Policies select and constrain rules; they don’t modify rule behavior
2. Lexical Structure
2.1 Character Set
Source files are UTF-8 encoded.
2.2 Whitespace and Comments
Whitespace ::= ' ' | '\t' | '\n' | '\r' .
LineComment ::= '//' [^\n]* '\n' .
BlockComment ::= '/*' .* '*/' .
Comments and whitespace are ignored except as token separators.
2.3 Keywords
namespace use type struct enum predicate
instance rule derive policy profile
must should may
let for in if else match as
query add true false empty
2.4 Identifiers
Ident ::= [a-zA-Z_][a-zA-Z0-9_]* .
QualifiedName ::= Ident (':' Ident)* .
Examples: foo, core:Store, aws:cfn:Resource
2.5 Literals
StringLiteral ::= '"' [^"]* '"' .
BoolLiteral ::= 'true' | 'false' .
2.6 Operators and Punctuation
{ } ( ) [ ] / * = , : => _
3. Grammar
3.1 Top-Level Items
File ::= Item* .
Item ::= NamespaceDecl
| UseDecl
| TypeDecl
| StructDecl
| EnumDecl
| PredicateDecl
| InstanceDecl
| RuleDecl
| DeriveDecl
| PolicyDecl
| ProfileDecl
| ProfileSelection .
3.2 Declarations
NamespaceDecl ::= 'namespace' Ident '{' Item* '}' .
UseDecl ::= 'use' QualifiedName .
TypeDecl ::= Annotation* 'type' Ident .
StructDecl ::= Annotation* 'struct' Ident '{' FieldDecl* '}' .
FieldDecl ::= Ident ':' TypeRef .
EnumDecl ::= Annotation* 'enum' Ident '{' VariantList '}' .
VariantList ::= Ident (',' Ident)* ','? .
PredicateDecl ::= 'predicate' Ident .
InstanceDecl ::= 'instance' QualifiedName ':' QualifiedName .
3.3 Rules and Derives
RuleDecl ::= 'rule' Ident '{' Statement* '}' .
DeriveDecl ::= 'derive' Ident '{' Statement* '}' .
3.4 Policies and Profiles
PolicyDecl ::= 'policy' Ident '{' PolicyBinding* '}' .
PolicyBinding ::= Modal QualifiedName .
ProfileDecl ::= 'profile' QualifiedName '{' ProfileItem* '}' .
ProfileItem ::= 'policy' QualifiedName .
ProfileSelection ::= 'profile' QualifiedName .
3.5 Statements
Statement ::= LetStatement
| ForStatement
| IfStatement
| ModalStatement
| AddStatement .
LetStatement ::= 'let' Ident '=' Expr .
ForStatement ::= 'for' Ident 'in' Expr '{' Statement* '}' .
IfStatement ::= 'if' Expr '{' Statement* '}' ('else' '{' Statement* '}')? .
ModalStatement ::= Modal Expr ModalMetadata? .
ModalMetadata ::= '{' MetadataItem (',' MetadataItem)* ','? '}' .
MetadataItem ::= 'subject' ':' Expr
| 'area' ':' QualifiedName
| 'message' ':' StringLiteral .
AddStatement ::= 'add' '(' Expr ',' QualifiedName ',' Expr ')' .
Modal ::= 'must' | 'should' | 'may' .
3.6 Expressions
Expr ::= PrimaryExpr AsExpr? .
PrimaryExpr ::= QueryExpr
| AddExpr
| MatchExpr
| EmptyExpr
| QualifiedName
| Ident
| StringLiteral
| BoolLiteral
| '_' .
AsExpr ::= 'as' '(' QualifiedName ')' .
QueryExpr ::= 'query' '(' QueryPath ')' .
QueryPath ::= QueryStep ('/' QueryStep)* .
QueryStep ::= NodeTest Predicate* .
NodeTest ::= QualifiedName | Ident | '*' .
Predicate ::= '[' PredicateExpr ']' .
PredicateExpr ::= QueryPath
| QueryPath '=' StringLiteral .
AddExpr ::= 'add' '(' Expr ',' QualifiedName ',' Expr ')' .
MatchExpr ::= 'match' Expr '{' MatchArm* '}' .
MatchArm ::= Pattern (',' Pattern)* '=>' Expr ','? .
Pattern ::= Ident | 'else' .
EmptyExpr ::= 'empty' '(' Expr ')' .
3.7 Annotations
Annotation ::= '@#' Ident '(' AnnotationArg (',' AnnotationArg)* ')' .
AnnotationArg ::= Ident '=' Literal .
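For example, a declaration could carry an annotation like the following (the annotation name and argument are illustrative; this specification does not define a standard annotation set):

```
@#doc(summary = "Criticality classification for data stores")
enum DataCriticality {
    Disposable,
    MissionCritical
}
```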
4. Type System
4.1 Graph Model
The system operates on a directed graph of entities connected by predicates.
| Concept | Description |
|---|---|
| Entity | A node in the graph with a unique identity |
| Predicate | A named relationship between entities or from entity to literal |
| Triple | (Subject, Predicate, Object) where Object is Entity or Literal |
4.2 Built-in Types
| Type | Description |
|---|---|
| wa2:Type | A type definition |
| wa2:Predicate | A predicate definition |
| wa2:Namespace | A namespace |
| wa2:subTypeOf | Enum variant relationship |
| wa2:type | Type assignment predicate |
| wa2:contains | Containment relationship |
4.3 Enum Types
Enums define a closed set of valid values:
enum DataCriticality {
Disposable,
NonCritical,
Important,
BusinessCritical,
MissionCritical
}
Each variant becomes an entity with wa2:subTypeOf pointing to the enum type.
4.4 Evaluation Results
Expressions evaluate to one of:
| Result | Description |
|---|---|
| Entity | Single entity reference |
| Set | Zero or more entities |
| Literal | String value |
| Empty | Absence of value |
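As an illustrative sketch, each result kind can arise as follows (core:absent is a hypothetical predicate used only to show an empty result; the constructs are defined in the Semantics section):

```
let stores = query(core:Store)              // Set: zero or more matching entities
let node = add(_, wa2:type, core:Evidence)  // Entity: add returns its subject
let label = "BusinessCritical"              // Literal: a string value
let nothing = query(core:Store/core:absent) // empty Set (falsy) when no triples match
```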
5. Semantics
5.1 Truthiness
A value is truthy if:
| Result | Truthy When |
|---|---|
| Entity | Always |
| Set | Non-empty |
| Literal | Non-empty string and not "false" |
| Empty | Never |
A value is falsy if not truthy.
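A sketch of truthiness driving control flow, reusing the evidence model from the examples in section 8:

```
for store in query(core:Store) {
    // The query yields a Set: truthy when non-empty, falsy when empty
    if query(store/core:Evidence) {
        let evidence = query(store/core:Evidence)
        // ... further processing of stores that carry evidence
    }
}
```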
5.2 Query Semantics
query(path/to/target)
- Traverses the graph following the path
- Returns a Set of matching entities or literals
- Empty set if no matches
Variable Binding: If the first path segment is an unqualified name that matches a bound variable, traversal starts from that entity:
let source = query(store/core:source)
let tags = query(source/aws:Tags) // starts from 'source'
Predicates: Filter results:
query(core:Store[core:Evidence/data:isCritical]) // stores with critical evidence
query(source/aws:Tags/*[aws:Key = "Environment"]) // tags with specific key
5.3 Modal Statements
Modal statements are the primary mechanism for expressing requirements.
must <expr> { subject: <expr>, area: <name>, message: <string> }
should <expr> { ... }
may <expr> { ... }
Evaluation:
- Evaluate expression
- If truthy → continue to next statement
- If falsy:
- Create finding with specified metadata
- Apply guard behavior based on modal
Modal Behavior Matrix:
| Modal | On Falsy | Guard | Severity |
|---|---|---|---|
| must | Fail | Yes | Error |
| should | Warn | Yes | Warning |
| may | Pass | No | Info |
Guard Behavior: When guard applies, remaining statements in the current block are skipped. Outer scopes continue.
for store in stores {
should query(store/core:Evidence) {
message: "Store needs evidence"
}
// If above fails, this line is skipped for this store:
let evidence = query(store/core:Evidence)
add(evidence, wa2:contains, fact)
}
// Loop continues with next store
5.4 As-Conversion
The as(Type) operator validates and converts values:
let criticality = query(tag/aws:Value) as(DataCriticality)
Behavior:
- Evaluate inner expression to get literal value
- Check if value matches a variant of the target enum
- If valid → return the value
- If invalid → return Empty
Type Not Found: If the target type does not exist, this is always an error regardless of context, since it indicates a framework bug.
5.5 Match Expressions
match <expr> {
Pattern1, Pattern2 => result1,
Pattern3 => result2,
else => default
}
- Evaluates expression to get a literal value
- Tests patterns in order
- Returns result of first matching arm
- else matches anything
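A concrete arm-matching sketch, taken from the complete example in section 8.1:

```
let is_critical = match criticality {
    Disposable, NonCritical, Important => false,
    else => true
}
```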
5.6 Add Expressions
add(subject, predicate, object)
- Creates a triple in the graph
- Returns the subject entity
- _ as subject creates a blank node
let evidence = add(_, wa2:type, core:Evidence) // new blank node
add(store, wa2:contains, evidence) // link to store
5.7 Empty Check
empty(expr)
- Returns truthy ("true") if expression is empty/falsy
- Returns Empty if expression is non-empty/truthy
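Because empty inverts truthiness, it is the natural way to assert absence. A sketch (data:isPublic is a hypothetical fact type invented for this example):

```
rule no_public_critical_stores {
    for store in query(core:Store[core:Evidence/data:isCritical]) {
        must empty(query(store/core:Evidence/data:isPublic)) {
            subject: store,
            message: "Critical stores must not be publicly accessible"
        }
    }
}
```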
6. Execution Model
6.1 Phase Order
1. Load Model (e.g., CloudFormation projection)
2. Load Framework (types, predicates, derives, rules, policies)
3. Select Profile
4. Derive Phase (fixed-point, model building)
5. Rule Phase (sequential by policy order, validation only)
6. Collect Findings
6.2 Derive Phase
Purpose: Enrich the graph with computed facts.
Constraints:
| Allowed | Not Allowed |
|---|---|
| add statements/expressions | must modal |
| should, may modals | — |
| Blank nodes (_) | — |
Execution:
- Runs to fixed-point (until no new facts are added)
- Order of derives does not matter (monotonic)
- Guards operate normally (should/may skip remaining statements in block on failure)
- Findings from should are collected as warnings
Rationale: Derives build the model; they cannot cause overall failure. A missing tag might prevent evidence creation, but that’s detected by rules.
6.3 Rule Phase
Purpose: Evaluate conditions and produce findings. Rules do not modify the model.
Constraints:
| Allowed | Not Allowed |
|---|---|
| must, should, may modals | add statements/expressions |
| Queries | Blank nodes (_) |
Execution:
- Model is stable (derives have completed)
- Only rules referenced by the selected profile’s policies are executed
- Rules execute in policy declaration order
- Modal statements evaluate immediately
Rationale: Rules validate a complete model. Since derives run first, all computed facts are available when rules execute.
6.4 Policy and Profile
Profile: Selects which policies are active.
profile production {
policy data_protection
policy compliance_checks
}
Policy: Binds rules with execution modals that control sequential flow.
policy data_protection {
must all_stores_classified
must critical_stores_protected
should encryption_enabled
}
Two-Phase Execution:
- Derive phase completes first (model building)
- Rule phase executes rules in policy order
Policy Modal Semantics:
The policy modal controls whether execution continues to the next rule based on whether the current rule produced any Error-level findings:
| Policy Modal | Rule Outcome | Effect |
|---|---|---|
| must | Has Errors | Stop policy, report Fail |
| must | No Errors | Continue to next rule |
| should | Has Errors | Note degraded, continue |
| should | No Errors | Continue to next rule |
| may | Any | Always continue |
A rule “passes” if it produces no Error-level findings (warnings and info are acceptable).
Policy Outcomes:
| Outcome | Meaning |
|---|---|
| Pass | All rules passed |
| Degraded | All must rules passed, but some should rules failed |
| Fail | At least one must rule failed |
Example:
policy protect_stores_based_on_classification {
must all_stores_must_be_classified // stops if this produces Errors
must ensure_critical_stores_protected // only runs if above passed
}
Per-Entity Dependencies: Handled naturally through evidence model. A rule checking for protection evidence will only match stores that have classification evidence, because the derive that creates protection evidence depends on classification evidence existing.
6.5 Fixed-Point Iteration
Derives execute in a fixed-point loop:
repeat until no change:
for each derive:
execute body
track (derive, binding) to avoid reprocessing same entity
Maximum iterations are bounded to prevent infinite loops.
Rules do not use fixed-point iteration; they execute once per entity in policy order.
6.6 Namespace Resolution
- Unqualified names inside namespace X { ... } resolve to X:name
- use statements import namespaces for reference
- Type references in as(Type) are qualified by current namespace if unqualified
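A sketch of these resolution rules (the declarations are illustrative):

```
namespace data {
    type Criticality        // declares data:Criticality

    rule classified {
        for tag in query(aws:Tags) {
            // Unqualified type in as(...) resolves within namespace data
            let c = query(tag/aws:Value) as(Criticality)
        }
    }
}

use data    // imports data so its names can be referenced elsewhere
```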
7. Findings
7.1 Structure
A finding consists of:
| Field | Type | Description |
|---|---|---|
| subject | Entity | The entity this finding relates to |
| area | Entity | The type/category for educational content |
| message | String | Human-readable action to resolve |
| severity | Enum | core:Error, core:Warning, core:Info |
| assertion | String | Rule name and modal that produced this |
7.2 Production
Findings are produced only by:
- Modal statements (must, should, may)
as(Type) produces no findings; it returns Empty on invalid values.
7.3 Severity Mapping
| Modal | Severity Entity |
|---|---|
| must | core:Error |
| should | core:Warning |
| may | core:Info |
8. Examples
8.1 Complete Example
use core
use aws:cfn
use data
// Define classification taxonomy
enum DataCriticality {
Disposable,
NonCritical,
Important,
BusinessCritical,
MissionCritical
}
// Activate policies
profile production {
policy protect_critical_data
}
// Define policy requirements
policy protect_critical_data {
must all_stores_classified
must critical_stores_protected
}
// Rule: all stores need classification
rule all_stores_classified {
for store in query(core:Store) {
let source = query(store/core:source)
must query(store/core:Evidence/data:Criticality) {
subject: source,
area: data:Criticality,
message: "Store must have criticality classification"
}
}
}
// Rule: critical stores need protection
rule critical_stores_protected {
for store in query(core:Store[core:Evidence/data:isCritical]) {
let source = query(store/core:source)
must query(store/core:Evidence/data:isResilient) {
subject: source,
area: data:isResilient,
message: "Critical stores must be protected from loss"
}
}
}
// Derive: extract classification from tags
derive classification_from_tags {
for store in query(core:Store[core:source/aws:cfn:Resource]) {
let source = query(store/core:source)
let dc_tag = query(source/aws:Tags/*[aws:Key = "DataCriticality"])
should dc_tag {
subject: source,
area: DataCriticality,
message: "Add a DataCriticality tag to this resource"
}
let criticality = query(dc_tag/aws:Value) as(DataCriticality)
should criticality {
subject: source,
area: DataCriticality,
message: "DataCriticality tag must be a valid classification"
}
// Create evidence
let evidence = add(_, wa2:type, core:Evidence)
add(store, wa2:contains, evidence)
let fact = add(_, wa2:type, data:Criticality)
add(evidence, wa2:contains, fact)
// Determine if critical
let is_critical = match criticality {
Disposable, NonCritical, Important => false,
else => true
}
// Mark as critical if applicable
if is_critical {
let crit_fact = add(_, wa2:type, data:isCritical)
add(evidence, wa2:contains, crit_fact)
}
}
}
8.2 Guard Behavior Example
derive example {
for item in query(core:Item) {
// If this fails, remaining statements for this item are skipped
should query(item/core:required_field) {
message: "Item needs required_field"
}
// Only reached if above passes
let value = query(item/core:required_field)
add(item, core:processed, value)
}
// Loop continues with next item regardless of guard
}
8.3 As-Conversion Examples
Standalone validation (returns value or Empty):
let criticality = query(tag/aws:Value) as(DataCriticality)
should criticality {
message: "Tag value must be valid"
}
9. Reserved for Future
The following features are not yet implemented but are under consideration:
- Aggregation functions (count, sum, all, any)
- Arithmetic expressions
- Rule return values and policy constraints on outcomes
- Negation in queries (not)
- Optional chaining in queries
- Import/export between files
- Macros and code generation
10. References
10.1 Grammar Summary
Whitespace ::= ' ' | '\t' | '\n' | '\r' .
LineComment ::= '//' [^\n]* '\n' .
BlockComment ::= '/*' .* '*/' .
Ident ::= [a-zA-Z_][a-zA-Z0-9_]* .
QualifiedName ::= Ident (':' Ident)* .
StringLiteral ::= '"' [^"]* '"' .
BoolLiteral ::= 'true' | 'false' .
File ::= Item* .
Item ::= NamespaceDecl
| UseDecl
| TypeDecl
| StructDecl
| EnumDecl
| PredicateDecl
| InstanceDecl
| RuleDecl
| DeriveDecl
| PolicyDecl
| ProfileDecl
| ProfileSelection .
NamespaceDecl ::= 'namespace' Ident '{' Item* '}' .
UseDecl ::= 'use' QualifiedName .
TypeDecl ::= Annotation* 'type' Ident .
StructDecl ::= Annotation* 'struct' Ident '{' FieldDecl* '}' .
FieldDecl ::= Ident ':' TypeRef .
EnumDecl ::= Annotation* 'enum' Ident '{' VariantList '}' .
VariantList ::= Ident (',' Ident)* ','? .
PredicateDecl ::= 'predicate' Ident .
InstanceDecl ::= 'instance' QualifiedName ':' QualifiedName .
RuleDecl ::= 'rule' Ident '{' Statement* '}' .
DeriveDecl ::= 'derive' Ident '{' Statement* '}' .
PolicyDecl ::= 'policy' Ident '{' PolicyBinding* '}' .
PolicyBinding ::= Modal QualifiedName .
ProfileDecl ::= 'profile' QualifiedName '{' ProfileItem* '}' .
ProfileItem ::= 'policy' QualifiedName .
ProfileSelection ::= 'profile' QualifiedName .
Statement ::= LetStatement
| ForStatement
| IfStatement
| ModalStatement
| AddStatement .
LetStatement ::= 'let' Ident '=' Expr .
ForStatement ::= 'for' Ident 'in' Expr '{' Statement* '}' .
IfStatement ::= 'if' Expr '{' Statement* '}' ('else' '{' Statement* '}')? .
ModalStatement ::= Modal Expr ModalMetadata? .
ModalMetadata ::= '{' MetadataItem (',' MetadataItem)* ','? '}' .
MetadataItem ::= 'subject' ':' Expr
| 'area' ':' QualifiedName
| 'message' ':' StringLiteral .
AddStatement ::= 'add' '(' Expr ',' QualifiedName ',' Expr ')' .
Modal ::= 'must' | 'should' | 'may' .
Expr ::= PrimaryExpr AsExpr? .
PrimaryExpr ::= QueryExpr
| AddExpr
| MatchExpr
| EmptyExpr
| QualifiedName
| Ident
| StringLiteral
| BoolLiteral
| '_' .
AsExpr ::= 'as' '(' QualifiedName ')' .
QueryExpr ::= 'query' '(' QueryPath ')' .
QueryPath ::= QueryStep ('/' QueryStep)* .
QueryStep ::= NodeTest Predicate* .
NodeTest ::= QualifiedName | Ident | '*' .
Predicate ::= '[' PredicateExpr ']' .
PredicateExpr ::= QueryPath
| QueryPath '=' StringLiteral .
AddExpr ::= 'add' '(' Expr ',' QualifiedName ',' Expr ')' .
MatchExpr ::= 'match' Expr '{' MatchArm* '}' .
MatchArm ::= Pattern (',' Pattern)* '=>' Expr ','? .
Pattern ::= Ident | 'else' .
EmptyExpr ::= 'empty' '(' Expr ')' .
Annotation ::= '@#' Ident '(' AnnotationArg (',' AnnotationArg)* ')' .
AnnotationArg ::= Ident '=' Literal .
10.2 Severity Reference
| Context | Modal | Finding Produced | Severity | Guard |
|---|---|---|---|---|
| Modal statement | must | Yes | Error | Yes |
| Modal statement | should | Yes | Warning | Yes |
| Modal statement | may | Yes | Info | No |
| expr as(T) | invalid | No | — | Return Empty |
| expr as(T) | type not found | Always Error | — | — |
10.3 Built-in Predicates
| Predicate | Domain | Range | Description |
|---|---|---|---|
| wa2:type | Entity | Type | Assigns type to entity |
| wa2:subTypeOf | Type | Type | Enum variant relationship |
| wa2:contains | Entity | Entity | Containment/child relationship |
| core:source | Node | Resource | Links derived node to source |
| core:subject | Finding | Entity | Entity the finding relates to |
| core:area | Finding | Type | Category for educational content |
| core:message | Finding | Literal | Human-readable guidance |
| core:severity | Finding | Severity | Error/Warning/Info |
| core:assertion | Finding | Literal | Rule and modal that produced finding |
10.4 Item Type Constraints
| Item | Queries | Add | Allowed Modals | Creates |
|---|---|---|---|---|
| derive | ✓ | ✓ | should, may | Warning, Info |
| rule | ✓ | ✗ | must, should, may | Error, Warning, Info |
These constraints are enforced at compile time (lowering phase).