  • v3: open-source hasura v3 engine

    Hasura V3 Engine v3.alpha.12-19-2023
    V3-GitOrigin-RevId: 6605575a52b347b5e9a14ecd1cc736f113c663b3
    
    PR-URL: https://github.com/hasura/graphql-engine-mono/pull/10567
    Co-authored-by: Vishnu Bharathi <[email protected]>
    GitOrigin-RevId: 38c98a4b1971efe3ac724c2371c43ceb7d31f140
  • Manas Agarwal committed with hasura-bot 5 months ago
    e27e5b7f
    1 parent 10781221
Showing the first 44 files, as there are too many to display.
  • v3/.cargo/config.toml
     1 +[net]
     2 +git-fetch-with-cli = true
     3 + 
  • v3/.dockerignore
     1 +.git
     2 +target
  • v3/.gitignore
     1 +# Generated by Cargo
     2 +# will have compiled files and executables
     3 +debug/
     4 +target/
     5 + 
     6 +.cargo/*
     7 +!.cargo/config.toml
     8 + 
     9 +# These are backup files generated by rustfmt
     10 +**/*.rs.bk
     11 + 
     12 +# MSVC Windows builds of rustc generate these, which store debugging information
     13 +*.pdb
     14 + 
     15 +rusty-tags.vi
     16 + 
     17 +# place to keep untracked, helper files for local dev
     18 +ws/
     19 + 
     20 +**/*.profraw
     21 +coverage/
     22 + 
  • v3/CONTRIBUTING.md
     1 +# V3 Contributing Guide
     2 + 
     3 +## Getting Started
     4 + 
     5 +### Using Docker
     6 + 
     7 +Start a development container (which includes NDC agents for testing):
     8 + 
     9 +```
     10 +docker compose run --build --rm dev_setup bash
     11 +```
     12 + 
     13 +### Without Docker
     14 + 
     15 +You will need to install some packages:
     16 + 
     17 +- The Rust compiler
     18 +- `protobuf-compiler`
     19 + 
     20 +For development, you may need to install some additional tools such as `nextest`. See the [Dockerfile](Dockerfile).
     21 + 
     22 +### Building the source
     23 + 
     24 +If the dependencies are correctly installed as above, you should now be able to run
     25 + 
     26 +```
     27 +cargo build
     28 +```
     29 + 
     30 +From here, you can follow the instructions in <README.md> to set up a working server.
     31 + 
     32 +## Design Principles
     33 + 
     34 +### Separation of concerns: Open DDS vs NDC
     35 + 
     36 +NDC (formerly known as GDC) was introduced in v2, primarily as a way to improve development speed of new backends, but it also had several ancillary benefits which can be attributed to the separation of concerns between the NDC agent and the API engine.
     37 + 
     38 +In v3, we will work exclusively against the NDC abstraction to access data in databases.
     39 + 
      40 +There will be a separation between Open DDS (the metadata which the user provides to describe their data models and APIs), the v3 engine which implements the specification (as GDS), and NDC which provides access to data.
     41 + 
     42 +### Server should start reliably and instantly
     43 + 
     44 +A major problem in v2 was that the construction of a schema from the user’s metadata was slow, and could fail for several reasons (e.g. a database might have been unavailable). This meant that the server could fail to come back up after a restart, or replicas could end up with subtly different versions of the metadata.
     45 + 
     46 +In v3, the schema will be completely and uniquely determined by the Open DDS metadata.
     47 + 
      48 +NDC can be unavailable, or its schema can differ from what is in the Open DDS metadata. Both cases are fine, because the schema is determined only by the Open DDS metadata.
     49 + 
     50 +In fact, it is useful to allow these cases, because they will allow different deployment workflows in which the Open DDS metadata is updated before a database migration, for example.
     51 + 
     52 +### Open DDS: configuration over convention
     53 + 
     54 +In v2, there were several conventions baked into the construction of the schema from metadata.
     55 + 
     56 +E.g. all table root fields were named after the database table by default, or could be renamed after the fact in metadata. However, this meant that we had to prefix table names when we added new databases, in case their default names overlapped.
     57 + 
     58 +Several other type names and root field names have defaults in v2 metadata.
     59 + 
     60 +V3 adopts the principle that Open DDS metadata will be explicit about everything needed to determine the schema, so that no overlaps can occur if the data connector schema changes.
     61 + 
     62 +Open DDS metadata in general will favor configuration over convention everywhere, and any conventions that we want to add to improve the user experience should be implemented in the CLI or console instead.
     63 + 
     64 +## Server development guide
     65 + 
     66 +The most important parts of the code from a server point of view are illustrated here. Explanations of each are given below:
     67 + 
     68 +```
     69 +crates
     70 +├── engine
     71 +│   ├── bin
     72 +│   │   ├── engine
     73 +│   ├── src
     74 +│   │   ├── execute
     75 +│   │   ├── metadata
     76 +│   │   ├── schema
     77 +│   │   │   ├── operations
     78 +│   │   │   ├── types
     79 +├── lang-graphql
     80 +│   ├── src
     81 +│   │   ├── ast
     82 +│   │   ├── introspection
     83 +│   │   ├── lexer
     84 +│   │   ├── normalized_ast
     85 +│   │   ├── parser
     86 +│   │   ├── schema
     87 +│   │   ├── validation
     88 +├── open-dds
     89 +```
     90 + 
     91 +### `engine/bin/engine`
     92 + 
     93 +This executable takes in a metadata file and starts the v3 engine according to that file.
     94 + 
     95 +### `open-dds`
     96 + 
     97 +This crate contains the Open DDS metadata structure and an accessor library. This metadata is used to start the v3 engine.
     98 + 
     99 +### `engine`
     100 + 
     101 +This crate implements the Open DDS specification on top of the GraphQL primitives provided by the `lang-graphql` crate. It is responsible for validating Open DDS metadata, creating a GraphQL schema from resolved Open DDS metadata, and implementing the GraphQL operations.
     102 + 
     103 +#### `engine/src/metadata`
     104 + 
      105 +Resolves/validates the input Open DDS metadata and creates intermediate structures that are used in the rest of the crate for schema generation.
     106 + 
     107 +#### `engine/src/schema`
     108 + 
      109 +Provides functions to resolve the Open DDS metadata, generate the GraphQL schema from it, and execute queries against the schema.
     110 + 
     111 +#### `engine/src/schema/operations`
     112 + 
      113 +Contains the logic to define and execute the operations specified by the Open DDS spec.
     114 + 
      115 +Technically, these are fields of the `query_root`/`subscription_root` or `mutation_root`, and as such could be defined in the `schema::types::*_root` modules. However, this separation makes it easier to organize them (for example, `subscription_root` can also import the same set of operations).
     116 + 
     117 +Each module under `operations` would roughly define the following:
     118 + 
     119 +- IR: To capture the specified operation.
     120 +- Logic to generate schema for the given operation using data from resolved metadata.
     121 +- Logic to parse a normalized field from the request into the defined IR format.
     122 +- Logic to execute the operation.
     123 + 
     124 +#### `engine/src/schema/types`
     125 + 
     126 +TODO: This is a bit outdated, so we should fix this.
     127 + 
     128 +Contains one module for each GraphQL type that we generate in the schema. For example:
     129 + 
     130 +- `model_selection`: An object type for selecting fields from a table (eg: type of the table_by_pk field in query_root).
     131 +- `query_root`: An object type that represents the entry point for all queries.
     132 + 
     133 +Each module under `types` defines the following:
     134 + 
     135 +- IR: A container for a value of the type.
     136 +- Logic to generate schema for the given type using data from resolved metadata.
     137 +- Logic to parse a normalized object (selection set or input value) from the request into the defined IR format.
     138 + 
     139 +### `lang-graphql`
     140 + 
     141 +This crate is an implementation of the GraphQL specification in Rust. It provides types for the GraphQL AST, implements the lexer and parser, as well as validation and introspection operations.
     142 + 
     143 +#### `lang-graphql/src/ast`
     144 + 
     145 +The raw GraphQL AST (abstract syntax tree) types that are emitted by the parser.
     146 + 
     147 +#### `lang-graphql/src/introspection`
     148 + 
     149 +Provides schema and type introspection for GraphQL schemas.
     150 + 
     151 +#### `lang-graphql/src/lexer`
     152 + 
     153 +Lexer that emits tokens (eg: String, Number, Punctuation) for a raw GraphQL document string.
     154 + 
     155 +#### `lang-graphql/src/normalized_ast`
     156 + 
     157 +The normalized AST types. The raw AST can be validated and elaborated with respect to a GraphQL schema, producing the normalized AST.
     158 + 
     159 +#### `lang-graphql/src/parser`
     160 + 
     161 +Parser for GraphQL documents (executable operations and schema documents).
     162 + 
     163 +#### `lang-graphql/src/schema`
     164 + 
     165 +Types to define a GraphQL schema.
     166 + 
     167 +#### `lang-graphql/src/validation`
     168 + 
      169 +Validates GraphQL requests against a schema, and produces normalized ASTs, which contain additional relevant data from the schema.
     170 + 
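    To make the crate walkthrough above concrete, here is a minimal sketch of how a request flows through `lang-graphql` and into the engine. It mirrors the calls used in `engine/benches/execute.rs` from this commit, with `unwrap` in place of real error handling:

    ```rust
    // Sketch: parse -> validate/normalize -> generate IR, using the same calls
    // as the `execute` benchmark. Assumes the workspace crates are in scope.
    use std::collections::HashMap;

    use engine::execute::generate_ir;
    use engine::schema::GDS;
    use hasura_authn_core::Identity;
    use lang_graphql as gql;
    use open_dds::permissions::Role;

    fn run_pipeline(schema: &gql::schema::Schema<GDS>, raw_query: &str) {
        // 1. Lex and parse the raw document into the AST (`lang-graphql/src/ast`).
        let query = gql::parser::Parser::new(raw_query)
            .parse_executable_document()
            .unwrap();

        // 2. Validate the request against the schema, producing the normalized
        //    AST (`lang-graphql/src/validation`, `lang-graphql/src/normalized_ast`).
        let request = gql::http::Request {
            operation_name: None,
            query,
            variables: HashMap::new(),
        };
        let session = Identity::admin(Role::new("admin"))
            .get_role_authorization(None)
            .unwrap()
            .build_session(HashMap::new());
        let normalized_request =
            gql::validation::normalize_request(&session.role, schema, &request).unwrap();

        // 3. Hand the normalized request to the engine for IR generation
        //    (`engine/src/execute`), ready for query planning and execution.
        let _ir = generate_ir(schema, &session, &normalized_request).unwrap();
    }
    ```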
  • v3/Cargo.lock
    Diff is too large to be displayed.
  • v3/Cargo.toml
     1 +[workspace]
     2 +resolver = "2"
     3 +members = [
     4 + "tracing-util",
     5 + "lang-graphql",
     6 + "open-dds",
     7 + "engine",
     8 + "hasura-authn-core",
     9 + "hasura-authn-webhook",
     10 + "hasura-authn-jwt",
     11 + "custom-connector"
     12 +]
     13 + 
     14 +[profile.release]
     15 +debug = true
     16 + 
     17 +[profile.bench]
     18 +debug = true
     19 + 
  • v3/Dockerfile
     1 +# See https://github.com/LukeMathWalker/cargo-chef
     2 +FROM rust:1.72.0 as chef
     3 + 
     4 +WORKDIR app
     5 + 
     6 +RUN apt-get update \
     7 + && DEBIAN_FRONTEND=noninteractive \
     8 + apt-get install --no-install-recommends --assume-yes \
     9 + lld protobuf-compiler libssl-dev ssh git pkg-config curl jq
     10 + 
     11 +ENV CARGO_HOME=/app/.cargo
     12 + 
     13 +RUN cargo install cargo-chef just grcov critcmp cargo-nextest
     14 + 
     15 +RUN rustup component add clippy
     16 +RUN rustup component add rustfmt
     17 +# needed for coverage reporting
     18 +RUN rustup component add llvm-tools-preview
     19 + 
     20 +RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
     21 + 
     22 +###
     23 +# Plan recipe
     24 +FROM chef AS planner
     25 + 
     26 +ENV CARGO_HOME=/app/.cargo
     27 +ENV RUSTFLAGS="-C link-arg=-fuse-ld=lld"
     28 + 
     29 +COPY . .
     30 +RUN --mount=type=ssh cargo chef prepare --recipe-path recipe.json
     31 + 
     32 +###
     33 +# Build recipe
     34 +FROM chef AS builder
     35 + 
     36 +# Use lld
     37 +ENV CARGO_HOME=/app/.cargo
     38 +ENV PATH="$PATH:$CARGO_HOME/bin"
     39 +ENV RUSTFLAGS="-C link-arg=-fuse-ld=lld"
     40 + 
     41 +COPY --from=planner /app/recipe.json recipe.json
     42 +COPY --from=planner /app/.cargo/config.toml /app/.cargo/config.toml
     43 + 
     44 +# Build dependencies - this is the caching Docker layer!
     45 +RUN --mount=type=ssh cargo chef cook --release --all-targets --recipe-path recipe.json
     46 +RUN --mount=type=ssh cargo chef cook --all-targets --recipe-path recipe.json
     47 + 
     48 +# Copies the source after building dependencies to allow caching
     49 +COPY . .
     50 + 
     51 +###
     52 +# Builds the application
     53 +FROM builder AS built
     54 +# Build the app
     55 +RUN cargo build --release
     56 + 
     57 +###
     58 +# Ship the app in an image with `curl` and very little else
     59 +FROM ubuntu:jammy
     60 + 
     61 +# Install `curl` for health checks
     62 +RUN set -ex; \
     63 + apt-get update -q; \
     64 + apt-get install -q -y curl
     65 + 
     66 +# Install the engine
     67 +COPY --from=built /app/target/release/engine /usr/local/bin
     68 +ENTRYPOINT ["engine"]
     69 + 
  • v3/README.md
     1 +# Hasura GraphQL Engine V3
     2 + 
     3 +[![Docs](https://img.shields.io/badge/docs-v3.x-brightgreen.svg?style=flat)](https://hasura.io/docs/3.0/index/)
     4 + 
      5 +Hasura V3 is the API execution engine, built on the Open Data Domain Specification (OpenDD spec) and Native Data Connector Specification (NDC spec), that powers the Hasura Data Delivery Network (DDN). The engine runs against an OpenDDS metadata file and exposes a GraphQL endpoint according to the specified metadata. A data connector must run alongside the engine to execute data-source-specific queries.
     6 + 
     7 +## Data connectors
     8 + 
      9 +The Hasura V3 engine does not execute queries directly. Instead, it sends IR (an abstract intermediate query representation) to NDC agents (also known as data connectors). To run queries on a database, we need to run a data connector that supports that database.
     10 + 
      11 +Available data connectors are listed on the [Connector Hub](https://hasura.io/connectors).
     12 + 
     13 +For local development, we use the reference agent implementation that is a part of the [NDC spec](https://github.com/hasura/ndc-spec).
     14 + 
      15 +To start only the reference agent, run:
     16 +```sh
     17 +docker-compose up reference_agent
     18 +```
     19 +and point the host name `reference_agent` to localhost in your `/etc/hosts` file.
     20 + 
     21 +## Run V3 engine (with reference agent)
     22 + 
     23 +### Using `cargo`
      24 +The Hasura V3 engine is written in Rust, so `cargo` is required to build and run it locally.
     25 + 
     26 +To start the v3 engine locally, we need a `metadata.json` file and an auth config file.
     27 + 
      28 +The following steps run the V3 engine with the reference agent (a read-only, in-memory relational database with sample tables) and a sample metadata file, exposing a fixed GraphQL schema. This can be used to understand the build setup and the new V3 engine concepts.
     29 + 
     30 + ```sh
     31 + RUST_LOG=DEBUG cargo run --release --bin engine -- \
     32 + --metadata-path open-dds/examples/reference.json \
     33 + --authn-config-path auth_config.json
     34 + ```
     35 + 
      36 +A dev webhook implementation is provided in `hasura-authn-webhook/dev-auth-webhook`.
      37 +It exposes a `POST /validate-request` endpoint that converts the headers present
      38 +in the incoming request into an object containing session variables. Note that only
      39 +headers that start with `x-hasura-` will be returned in the response.
     40 + 
     41 +The dev webhook can be run using the following command:
     42 + 
     43 +```sh
     44 +docker compose up auth_hook
     45 +```
     46 +and point the host name `auth_hook` to localhost in your `/etc/hosts` file.
     47 + 
     48 +Open <http://localhost:3000> for GraphiQL.
     49 + 
      50 +Use the `--port` option to start v3-engine on a different port.
     51 +```sh
     52 +RUST_LOG=DEBUG cargo run --release --bin engine -- \
     53 + --port 8000 --metadata-path open-dds/examples/reference.json
     54 +```
     55 +Now, open <http://localhost:8000> for GraphiQL.
     56 + 
     57 +### With docker
      58 +You can also start the Hasura V3 engine, reference_agent, the dev authentication webhook, and Jaeger for
      59 +tracing (accessible at localhost:4002) using Docker, without needing `cargo`:
     60 + 
     61 +```sh
     62 +METADATA_PATH=open-dds/examples/reference.json AUTHN_CONFIG_PATH=auth_config.json docker compose up
     63 +```
     64 + 
     65 +## Run V3 engine (with Postgres)
      66 +[NDC Postgres](https://github.com/hasura/ndc-postgres) is Hasura's official connector for Postgres. To run the V3 engine with a GraphQL API on Postgres, you need to run the NDC Postgres connector and have a `metadata.json` file authored specifically for your Postgres database and models (tables, views, functions).
     67 + 
      68 +The recommended way to author `metadata.json` for Postgres is via Hasura DDN.
     69 + 
      70 +Follow the [Hasura DDN Guide](https://hasura.io/docs/3.0/getting-started/overview/) to create a Hasura DDN project, connect your cloud or local Postgres database (Hasura DDN provides a secure tunnel mechanism to connect your local database easily), and model your GraphQL API. You can then download the authored `metadata.json` and use the following steps to run the GraphQL API on your local Hasura V3 engine.
     71 + 
     72 +### Steps to run metadata with V3 engine locally
     73 + 
      74 +1. Download metadata from the DDN project using the Hasura V3 CLI:
     75 + ```sh
     76 + hasura3 build create --dry-run > ddn-metadata.json
     77 + ```
     78 + 
      79 +2. The following steps generate the Postgres metadata object and run the Postgres connector. They refer to the [NDC Postgres](https://github.com/hasura/ndc-postgres) repository:
     80 + 
      81 + 1. Start the Postgres connector in configuration mode (config server). The config server provides additional endpoints for database introspection and serves the schema of the database. The output of the config server will form the Postgres metadata object.
     82 +
     83 + 2. Run the following command in the [ndc-postgres](https://github.com/hasura/ndc-postgres) repository:
     84 +
     85 + ```bash
     86 + just run-config
     87 + ```
     88 + 
      89 + 3. Generate the Postgres configuration using the `new-configuration.sh` script by running the following
     90 + command (in another terminal) in the [ndc-postgres](https://github.com/hasura/ndc-postgres) repository:
     91 + 
     92 + ```bash
     93 + ./scripts/new-configuration.sh localhost:9100 '<postgres database url>' > pg-config.json
     94 + ```
      95 + 4. Now shut down the Postgres config server and start the Postgres connector using the `pg-config.json` generated in the above
     96 + step, by running the following command:
     97 +
      98 + Please specify a different `PORT` for each data connector:
     99 + 
     100 + ```bash
     101 + PORT=8100 \
     102 + RUST_LOG=INFO \
     103 + cargo run --bin ndc-postgres --release -- serve --configuration pg-config.json > /tmp/ndc-postgres.log
     104 + ```
     105 + 5. Fetch the schema for the data connector object by running the following command:
     106 + ```bash
     107 + curl -X GET http://localhost:8100/schema | jq . > pg-schema.json
     108 + ```
     109 + 6. Finally, generate the `DataConnector` object:
     110 + ```bash
     111 + jq --null-input --arg name 'default' --arg port '8100' --slurpfile schema pg-schema.json '{"kind":"DataConnector","version":"v2","definition":{"name":"\($name)","url":{"singleUrl":{"value":"http://localhost:\($port)"}},"schema":$schema[0]}}' > pg-metadata.json
     112 + ```
     113 + 
     114 +3. Now you have the NDC Postgres connector running, and have obtained the Postgres metadata (`pg-metadata.json`) which is required for the V3 engine.
     115 + 
     116 +4. In `ddn-metadata.json` (from step 1.), replace the `HasuraHubDataConnector` objects with `DataConnector` objects generated inside the `pg-metadata.json` file.
     117 +
     118 +5. Remove the object for `kind: AuthConfig` from `ddn-metadata.json`, move it to a separate file `auth_config.json`, and remove the `kind` field from it.
     119 +
     120 +6. Remove the object for `kind: CompatibilityConfig` from `ddn-metadata.json`. If desired, a `flags` field can be added to the OSS metadata to enable the flags corresponding to that compatibility date in the DDN metadata.
     121 + 
      122 +7. Finally, start the v3-engine with the modified metadata, using the following command (with the `ddn-metadata.json` and `auth_config.json` from the steps above):
     123 + ```bash
     124 + RUST_LOG=DEBUG cargo run --release --bin engine -- \
      125 + --metadata-path ddn-metadata.json --authn-config-path auth_config.json
     126 + ```
      127 + You should now have the v3-engine up and running at <http://localhost:3000>.
     128 + 
     129 +**Note**: We understand that these steps are not very straightforward, and we intend to continuously improve the developer experience of running OSS V3 Engine.
     130 + 
     131 +## Running tests
     132 + 
      133 +To run the test suite, you need to log in to `ghcr.io` with Docker first:
     134 + 
     135 +```bash
     136 +docker login -u <username> -p <token> ghcr.io
     137 +```
     138 + 
      139 +where `username` is your GitHub username, and `token` is your GitHub PAT (personal access token). The PAT needs to have the `read:packages` scope and `Hasura SSO` configured. See [this](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-with-a-personal-access-token-classic) for more details.
     140 + 
     141 + 
      142 +Next, run the Postgres NDC locally using `docker compose up postgres_connector` and point the host name `postgres_connector` to localhost in your `/etc/hosts` file.
     143 + 
      144 +Next, run the custom NDC locally using `docker compose up custom_connector` and point the host name `custom_connector` to localhost in your `/etc/hosts` file. Alternatively, you can run `cargo run --bin agent` and then `cargo test`.
     145 + 
      146 +### Testing/Development with the Chinook database
     147 + 
      148 +The `engine/tests/chinook` directory contains the static files required to run v3-engine with the Chinook database as a data connector.
     149 + 
     150 +To get this running, you can run the following command:
     151 + 
     152 +```bash
     153 +METADATA_PATH=engine/tests/schema.json AUTHN_CONFIG_PATH=auth_config.json docker compose up postgres_connector engine
     154 +```
     155 + 
      156 +If you are running the v3-engine locally through cargo, then you'll need to update the value of the
      157 +`singleUrl` present in `engine/tests/chinook/chinook_engine_metadata.json`
      158 +from `http://postgres_connector:8100` to `http://localhost:8100`.
     159 + 
     160 +### Running tests with a single command
     161 + 
     162 +Alternatively, the tests can be run in the same Docker image as CI:
     163 + 
     164 +```sh
     165 +just test
     166 +```
     167 + 
     168 +### Updating goldenfiles
     169 + 
      170 +There are some tests where we compare the output of the test against an expected golden file. If you make changes that are expected to change the golden files, you can regenerate them like this:
     171 + 
      172 +Locally (with postgres_connector pointing to localhost):
     173 + 
     174 +```sh
     175 + REGENERATE_GOLDENFILES=1 cargo test
     176 +```
     177 + 
     178 +Docker:
     179 + 
     180 +```sh
     181 + just update-golden-files
     182 +```
     183 + 
     184 +### Running coverage report
     185 + 
     186 +We can check for coverage of unit tests by running:
     187 + 
     188 +```sh
     189 +just coverage
     190 +```
     191 + 
      192 +You can also give a filter expression (which is passed to `grep -E`) to report coverage only for matching files:
     193 + 
     194 +```sh
     195 +just coverage "open-dds|engine"
     196 +```
     197 + 
     198 +## Run benchmarks
     199 + 
     200 +The benchmarks operate against the reference agent using the same test cases as the test suite, and need a similar setup.
     201 + 
     202 +To run benchmarks for the lexer, parser and validation:
     203 + 
     204 +```bash
     205 +cargo bench -p lang-graphql "lexer"
     206 +cargo bench -p lang-graphql "parser"
     207 +cargo bench -p lang-graphql "validation/.*"
     208 +```
     209 + 
     210 +Alternatively, the benchmarks can be run in the same Docker image as CI:
     211 + 
     212 +```sh
     213 +just ci-bench
     214 +```
     215 + 
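    The golden-file tests described above use the `goldenfile` crate (a dev-dependency of the `engine` crate in this commit, and the source of the `REGENERATE_GOLDENFILES=1` behavior). A minimal sketch of the pattern, with a hypothetical `render()` function standing in for real engine output:

    ```rust
    // Golden-file test sketch using the `goldenfile` crate. `render` is a
    // hypothetical function under test. When the `Mint` is dropped, the written
    // contents are diffed against the checked-in expected file; running with
    // REGENERATE_GOLDENFILES=1 rewrites the expected file instead of diffing.
    use std::io::Write;

    use goldenfile::Mint;

    fn render() -> String {
        "hello\n".to_string()
    }

    #[test]
    fn output_matches_goldenfile() {
        let mut mint = Mint::new("tests/goldenfiles");
        let mut golden = mint.new_goldenfile("output.txt").unwrap();
        // Write the produced output; the comparison happens when `mint` drops.
        write!(golden, "{}", render()).unwrap();
    }
    ```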
  • v3/auth_config.json
     1 +{
     2 + "version": "v1",
     3 + "definition": {
     4 + "allowRoleEmulationBy": "admin",
     5 + "mode": {
     6 + "webhook": {
     7 + "url": "http://auth_hook:3050/validate-request",
     8 + "method": "Post"
     9 + }
     10 + }
     11 + }
     12 +}
     13 + 
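    This `auth_config.json` points the engine at the dev auth webhook's `POST /validate-request` endpoint described in the README. The webhook's documented behavior is to keep only `x-hasura-`-prefixed request headers and return them as session variables; the following is an illustrative sketch of that filtering (not the actual `dev-auth-webhook` source):

    ```rust
    // Illustrative only: mimic the dev auth webhook's described behavior of
    // turning `x-hasura-*` request headers into a session-variable map.
    use std::collections::HashMap;

    fn to_session_variables(headers: &HashMap<String, String>) -> HashMap<String, String> {
        headers
            .iter()
            // Keep only headers whose (case-insensitive) name starts with `x-hasura-`.
            .filter(|(name, _)| name.to_ascii_lowercase().starts_with("x-hasura-"))
            .map(|(name, value)| (name.to_ascii_lowercase(), value.clone()))
            .collect()
    }

    fn main() {
        let mut headers = HashMap::new();
        headers.insert("x-hasura-role".to_string(), "admin".to_string());
        headers.insert("authorization".to_string(), "Bearer <token>".to_string());
        // Only `x-hasura-role` survives the filter.
        println!("{:?}", to_session_variables(&headers));
    }
    ```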
  • v3/benchmark.sh
      1 +# Benchmark script to compare the benchmarks of the current PR's changes
      2 +# against the main branch
     3 + 
     4 +# Run the benchmarks on the current PR and save it in a JSON file
     5 +#
      6 +# TODO: naveen, Benchmark separate targets and merge their JSON. Currently,
     7 +# `--save-baseline` only supports saving benchmarks for a single target. For
     8 +# now, we'll benchmark only the `execute` target
     9 +# See: https://bheisler.github.io/criterion.rs/book/faq.html#cargo-bench-gives-unrecognized-option-errors-for-valid-command-line-options
     10 +#
     11 +# cargo bench --bench execute --bench generate_ir --bench validation --bench parser -- --save-baseline current-benchmarks;
     12 +cargo bench --bench execute -- --save-baseline current-benchmarks;
     13 +critcmp --export current-benchmarks > current-benchmarks.json;
     14 + 
     15 +# Clone into the main branch of v3-engine
     16 +# https://[email protected]/hasura/v3-engine.git
     17 +git clone --branch main https://[email protected]/hasura/v3-engine.git main-copy;
     18 +pushd main-copy;
     19 +# Run the benchmarks on the main branch and save it in a JSON file
     20 +cargo bench --bench execute -- --save-baseline main-benchmarks;
     21 +critcmp --export main-benchmarks > /app/main-benchmarks.json;
     22 +popd
     23 + 
      24 +# Compare the benchmarks of main with the benchmarks generated by the PR
     25 +critcmp main-benchmarks.json current-benchmarks.json > diff.txt;
     26 + 
     27 +# Format the result of benchmark into a Code comment, so that when we
     28 +# comment on the PR, the benchmark results are displayed in a nice format
     29 +sed -i '1s/^/Benchmark report\n ```\n/' diff.txt;
     30 +echo '```' >> diff.txt;
     31 + 
     32 +# Format the results into a json format
      33 +# { "body": "<BENCHMARK AS STRING>" }
     34 +echo '{}' | jq --rawfile diff diff.txt '.body=$diff' > bench_comment.json
     35 + 
     36 +curl -s -H "Authorization: token $2" \
     37 + -X GET "https://api.github.com/repos/hasura/v3-engine/issues/$1/comments" > /tmp/comments.json
     38 +comment_id=$(jq 'map(select(.body | contains ("Benchmark report"))) | last | .id' /tmp/comments.json)
     39 +echo $comment_id
     40 +# Post the benchmark as comment on PR or update existing one
     41 +if [ "$comment_id" = "null" ]; then
     42 + curl -L \
     43 + -X POST \
     44 + -H "Accept: application/vnd.github+json" \
     45 + -H "Authorization: Bearer $2" \
     46 + -H "X-GitHub-Api-Version: 2022-11-28" \
     47 + "https://api.github.com/repos/hasura/v3-engine/issues/$1/comments" \
     48 + -d @bench_comment.json
     49 +else
     50 + curl -L \
     51 + -X PATCH \
     52 + -H "Accept: application/vnd.github+json" \
     53 + -H "Authorization: Bearer $2" \
     54 + -H "X-GitHub-Api-Version: 2022-11-28" \
     55 + "https://api.github.com/repos/hasura/v3-engine/issues/comments/$comment_id" \
     56 + -d @bench_comment.json
     57 +fi
     58 + 
  • v3/ci.docker-compose.yaml
     1 +version: "3.9"
     2 +services:
     3 + postgres:
     4 + image: postgis/postgis:16-3.4
     5 + platform: linux/amd64
     6 + command:
     7 + - -F # turn fsync off for speed
     8 + - -N 1000 # increase max connections from 100 so we can run more HGEs
     9 + environment:
     10 + POSTGRES_PASSWORD: "password"
     11 + volumes:
     12 + - type: volume
     13 + source: postgres
     14 + target: /var/lib/postgresql/data
     15 + - type: bind
     16 + source: ./engine/tests/db_definition.sql
     17 + target: /docker-entrypoint-initdb.d/db_definition.sql
     18 + read_only: true
     19 + healthcheck:
     20 + test:
     21 + - CMD-SHELL
     22 + - psql -U "$${POSTGRES_USER:-postgres}" < /dev/null && sleep 5 && psql -U "$${POSTGRES_USER:-postgres}" < /dev/null
     23 + start_period: 5s
     24 + interval: 5s
     25 + timeout: 10s
     26 + retries: 20
     27 + postgres_connector:
     28 + image: ghcr.io/hasura/ndc-postgres:dev-main-5aec135c1
     29 + volumes:
     30 + - ./engine/tests/pg_ndc_config.json:/config.json
     31 + depends_on:
     32 + postgres:
     33 + condition: service_healthy
     34 + healthcheck:
     35 + test: curl --fail http://localhost:8100/schema || exit 1
     36 + start_period: 5s
     37 + interval: 5s
     38 + timeout: 10s
     39 + retries: 20
     40 + command: ["serve", "--configuration", "/config.json"]
     41 + custom_connector:
     42 + build:
     43 + context: .
     44 + target: builder
     45 + healthcheck:
     46 + test: curl --fail http://localhost:8101/schema || exit 1
     47 + start_period: 5s
     48 + interval: 5s
     49 + timeout: 10s
     50 + retries: 20
     51 + command: ["sh", "-c", "cargo run --bin agent"]
     52 + source_only:
     53 + build:
     54 + context: .
     55 + target: builder
     56 + ssh:
     57 + - default
     58 + volumes:
     59 + # So that updated files make their way back to the host machine
     60 + - ./tracing-util:/app/tracing-util
     61 + - ./lang-graphql:/app/lang-graphql
     62 + - ./open-dds:/app/open-dds
     63 + - ./engine:/app/engine
     64 + - ./hasura-authn-core:/app/hasura-authn-core
     65 + - ./hasura-authn-webhook:/app/hasura-authn-webhook
     66 + - ./coverage:/app/coverage
     67 + test_setup:
     68 + build:
     69 + context: .
     70 + target: builder
     71 + ssh:
     72 + - default
     73 + depends_on:
     74 + - postgres
     75 + - postgres_connector
     76 + - custom_connector
     77 + volumes:
     78 + # So that updated files make their way back to the host machine
     79 + - ./tracing-util:/app/tracing-util
     80 + - ./lang-graphql:/app/lang-graphql
     81 + - ./open-dds:/app/open-dds
     82 + - ./engine:/app/engine
     83 + - ./hasura-authn-core:/app/hasura-authn-core
     84 + - ./hasura-authn-webhook:/app/hasura-authn-webhook
     85 + - ./benchmark.sh:/app/benchmark.sh
     86 + - ./coverage:/app/coverage
     87 + 
     88 +volumes:
     89 + postgres:
     90 + 
  • v3/coverage.sh
     1 +#!/bin/bash
     2 + 
     3 +set -e
     4 + 
     5 +export TEST_FAILURE=0
     6 +# NOTE: We only run unit tests via --lib and --bins flags
     7 +RUSTFLAGS="-Cinstrument-coverage" LLVM_PROFILE_FILE="cargo-test-%p-%m.profraw" cargo test --lib --bins || TEST_FAILURE=1
     8 +if [ "$TEST_FAILURE" -eq 1 ]; then
     9 + echo "WARNING: test failures found..."
     10 +fi
     11 +echo -e "\ngenerating coverage report...\n"
     12 +mkdir -p coverage
     13 +grcov . --binary-path ./target/debug/deps/ -s . -t markdown -p "${CARGO_HOME:=.cargo}" --branch --ignore-not-existing --ignore "../*" --ignore "/*" -o coverage
     14 +# Strip header and footer and then sort
     15 +(tail -n +3 coverage/markdown.md | head -n -2 | sort) > /tmp/sorted.md
     16 +# Filter filepaths based on input argument
     17 +if [ -n "$1" ]; then
     18 + (grep -E "$1" /tmp/sorted.md > /tmp/sorted_filtered.md) || (echo "No relevant files found for coverage in current changelist" > /tmp/sorted_filtered.md)
     19 +else
     20 + cat /tmp/sorted.md > /tmp/sorted_filtered.md
     21 +fi
     22 +# Add header and footer back
     23 +(head -n 2 coverage/markdown.md && cat /tmp/sorted_filtered.md && tail -n 2 coverage/markdown.md ) > coverage/markdown_sorted.md
     24 +cat coverage/markdown_sorted.md
     25 +rm -rf **/*.profraw
     26 + 
  • v3/custom-connector/Cargo.toml
     1 +[package]
     2 +name = "custom-connector"
     3 +version = "0.1.0"
     4 +edition = "2021"
     5 + 
     6 +[dependencies]
     7 +ndc-client = { git = "https://github.com/hasura/ndc-spec.git", tag = "v0.1.0-rc.12" }
     8 +indexmap = "2"
     9 +serde_json = "^1.0.92"
     10 +tokio = { version = "^1.26.0", features = [
     11 + "macros",
     12 + "rt-multi-thread",
     13 + "parking_lot",
     14 +] }
     15 +axum = "^0.6.9"
     16 +regex = "^1.7.3"
     17 +csv = "^1.2.2"
     18 +prometheus = "^0.13.3"
     19 + 
  • v3/custom-connector/src/actors.csv
     1 +id,name,movie_id
     2 +1,Leonardo DiCaprio,1
     3 +2,Kate Winslet,1
     4 +3,Irfan Khan,2
     5 +4,Al Pacino,3
     6 +5,Robert De Niro,3
     7 + 
  • v3/custom-connector/src/bin/agent/main.rs
    Diff is too large to be displayed.
  • v3/custom-connector/src/movies.csv
     1 +id,title,rating
     2 +1,Titanic,4
     3 +2,Slumdog Millionaire,5
     4 +3,Godfather,4
     5 + 
  • v3/docker-compose.yaml
     1 +version: "3.9"
     2 +services:
     3 + postgres:
     4 + image: postgis/postgis:16-3.4
     5 + platform: linux/amd64
     6 + restart: always
     7 + command:
     8 + - -F # turn fsync off for speed
     9 + - -N 1000 # increase max connections from 100 so we can run more HGEs
     10 + ports:
     11 + - 64001:5432
     12 + environment:
     13 + POSTGRES_PASSWORD: "password"
     14 + volumes:
     15 + - type: volume
     16 + source: postgres
     17 + target: /var/lib/postgresql/data
     18 + - type: bind
     19 + source: ./engine/tests/db_definition.sql
     20 + target: /docker-entrypoint-initdb.d/db_definition.sql
     21 + read_only: true
     22 + healthcheck:
     23 + test:
     24 + - CMD-SHELL
     25 + - psql -U "$${POSTGRES_USER:-postgres}" < /dev/null && sleep 5 && psql -U "$${POSTGRES_USER:-postgres}" < /dev/null
     26 + start_period: 5s
     27 + interval: 5s
     28 + timeout: 10s
     29 + retries: 20
     30 + reference_agent:
     31 + build: https://github.com/hasura/ndc-spec.git#v0.1.0-rc.11
     32 + ports:
     33 + - 8102:8100
     34 + auth_hook:
     35 + ports:
     36 + - "3050:3050"
     37 + build: ./hasura-authn-webhook/dev-auth-webhook
     38 + jaeger:
     39 + image: jaegertracing/all-in-one:1.37
     40 + restart: always
     41 + ports:
     42 + - 5775:5775/udp
     43 + - 6831:6831/udp
     44 + - 6832:6832/udp
     45 + - 5778:5778
     46 + - 4002:16686
     47 + - 14250:14250
     48 + - 14268:14268
     49 + - 14269:14269
     50 + - 4317:4317 # OTLP gRPC
     51 + - 4318:4318 # OTLP HTTP
     52 + - 9411:9411
     53 + environment:
     54 + COLLECTOR_OTLP_ENABLED: 'true'
     55 + COLLECTOR_ZIPKIN_HOST_PORT: '9411'
     56 + postgres_connector:
     57 + image: ghcr.io/hasura/ndc-postgres:dev-main-5aec135c1
     58 + ports:
     59 + - 8100:8100
     60 + volumes:
     61 + - ./engine/tests/pg_ndc_config.json:/config.json
     62 + depends_on:
     63 + postgres:
     64 + condition: service_healthy
     65 + command: ["serve", "--configuration", "/config.json", "--otlp-endpoint", "http://jaeger:4317"]
     66 + custom_connector:
     67 + build:
     68 + context: .
     69 + target: builder
     70 + ports:
     71 + - 8101:8101
     72 + healthcheck:
     73 + test: curl --fail http://localhost:8101/schema || exit 1
     74 + start_period: 5s
     75 + interval: 5s
     76 + timeout: 10s
     77 + retries: 20
     78 + command: ["sh", "-c", "cargo run --bin agent"]
     79 + engine:
     80 + build:
     81 + context: .
     82 + target: builder
     83 + ssh:
     84 + - default
     85 + depends_on:
     86 + - reference_agent
     87 + - jaeger
     88 + - auth_hook
     89 + 
     90 + command: ["sh", "-c", "cargo run --bin engine -- --metadata-path $METADATA_PATH --otlp-endpoint http://jaeger:4317 --authn-config-path $AUTHN_CONFIG_PATH"]
     91 + ports:
     92 + # Binding to localhost:3001 avoids conflict with dev_setup
     93 + - 3001:3000
     94 + 
     95 + dev_setup:
     96 + build:
     97 + context: .
     98 + target: builder
     99 + ssh:
     100 + - default
     101 + depends_on:
     102 + - jaeger
     103 + - auth_hook
     104 + - postgres
     105 + - postgres_connector
     106 + - custom_connector
     107 + volumes:
     108 + - ./tracing-util:/app/tracing-util
     109 + - ./lang-graphql:/app/lang-graphql
     110 + - ./open-dds:/app/open-dds
     111 + - ./engine:/app/engine
     112 + - ./hasura-authn-core:/app/hasura-authn-core
     113 + - ./hasura-authn-webhook:/app/hasura-authn-webhook
     114 + ports:
     115 + - 3000:3000
     116 + 
     117 +volumes:
     118 + postgres:
     119 + 
  • v3/drill.yml
     1 +---
     2 + 
     3 +concurrency: 10
     4 +base: 'http://localhost:3000'
     5 +iterations: 10000
     6 + 
     7 +plan:
     8 + - name: __typename query
     9 + request:
     10 + url: /graphql
     11 + method: POST
     12 + body: '{ "query": "query { __typename }" }'
     13 + headers:
     14 + Content-Type: 'application/json'
     15 + Host: example.hasura.app
     16 + - name: AlbumByID query
     17 + request:
     18 + url: /graphql
     19 + method: POST
     20 + body: '{ "query": "query { AlbumByID(AlbumId: 1) { Title } }" }'
     21 + headers:
     22 + Content-Type: 'application/json'
     23 + Host: example.hasura.app
     24 + 
  • v3/engine/Cargo.toml
     1 +[package]
     2 +name = "engine"
     3 +version = "3.0.0"
     4 +edition = "2021"
     5 + 
     6 +# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
     7 + 
     8 +[[bin]]
     9 +name = "engine"
     10 +path = "bin/engine/main.rs"
     11 + 
     12 +[dependencies]
     13 +indexmap = { version = "2", features = ["serde"] }
     14 +thiserror = "1.0"
     15 +hasura-authn-core = { path = "../hasura-authn-core" }
     16 +lang-graphql = { path = "../lang-graphql" }
     17 +tracing-util = { path = "../tracing-util" }
     18 +ndc-client = { git = "https://github.com/hasura/ndc-spec.git", tag = "v0.1.0-rc.12" }
     19 +open-dds = { path = "../open-dds" }
     20 +# util = { path = "../util" }
     21 +serde = "1.0.152"
     22 +serde_json = "1.0.92"
     23 +reqwest = { version = "^0.11", features = ["json", "multipart"] }
     24 +opentelemetry = "0.20.0"
     25 +opentelemetry_sdk = "0.20.0"
     26 +schemars = { version = "0.8.12", features = ["smol_str"] }
     27 +async-trait = "0.1.67"
     28 +derive_more = "0.99.17"
     29 +base64 = "0.21.2"
     30 +transitive = "0.5.0"
     31 +lazy_static = "1.4.0"
     32 +strum = { version = "^0.25.0" }
     33 +strum_macros = { version = "^0.25.2" }
     34 +itertools = "0.10.5"
     35 +url = "2.4.1"
     36 + 
     37 +# dependencies for authentication and execute
     38 +hasura-authn-webhook = { path = "../hasura-authn-webhook" }
     39 +hasura-authn-jwt = { path = "../hasura-authn-jwt" }
     40 +clap = { version = "4", features = ["derive", "env"] }
     41 +tokio = { version = "1.26.0", features = [
     42 + "macros",
     43 + "rt-multi-thread",
     44 + "parking_lot",
     45 +] }
     46 +axum = { version = "0.6.20" }
     47 +tower-http = { version = "0.4", features = ["trace", "cors"] }
     48 +uuid = "1.3.0"
     49 +bincode = "1.3.3"
     50 +regex = "1.7.3"
     51 +json_value_merge = "2.0"
     52 +async-recursion= "1.0.5"
     53 +nonempty = "0.8"
     54 + 
     55 +[dev-dependencies]
     56 +goldenfile = "1.4.5"
     57 +tokio-test = "0.4.2"
     58 +criterion = { version = "0.4", features = ["html_reports", "async_tokio"] }
     59 + 
     60 +[[bench]]
     61 +name = "execute"
     62 +harness = false
     63 + 
  • v3/engine/benches/execute.rs
     1 +use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
     2 +use engine::execute::query_plan::{execute_query_plan, generate_query_plan};
     3 +use engine::execute::{execute_query_internal, generate_ir};
     4 +use engine::schema::GDS;
     5 +use hasura_authn_core::Identity;
     6 +use lang_graphql::http::RawRequest;
     7 +use open_dds::permissions::Role;
     8 +use std::collections::HashMap;
     9 +use std::fs;
     10 +use std::path::PathBuf;
     11 +use tokio::runtime::Runtime;
     12 + 
     13 +extern crate json_value_merge;
     14 +use json_value_merge::Merge;
     15 +use serde_json::Value;
     16 + 
     17 +use std::path::Path;
     18 + 
     19 +use lang_graphql as gql;
     20 + 
     21 +pub fn merge_with_common_metadata(
     22 + common_metadata_path: &Path,
     23 + metadata_path_string: &Path,
     24 +) -> Value {
     25 + let common_metadata = fs::read_to_string(common_metadata_path).unwrap();
     26 + let test_metadata = fs::read_to_string(metadata_path_string).unwrap();
     27 + 
     28 + let mut first_json_value: Value = serde_json::from_str(&common_metadata.to_string()).unwrap();
     29 + let second_json_value: Value = serde_json::from_str(&test_metadata.to_string()).unwrap();
     30 + first_json_value.merge(&second_json_value);
     31 + first_json_value
     32 +}
     33 + 
     34 +pub fn bench_execute(
     35 + c: &mut Criterion,
     36 + test_path_string: &str,
     37 + common_metadata_path_string: &str,
     38 + benchmark_group: &str,
     39 +) {
     40 + let root_test_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("tests");
     41 + let test_path = root_test_dir.join(test_path_string);
     42 + let request_path = test_path.join("request.gql");
     43 + 
     44 + let common_metadata_path = root_test_dir.join(common_metadata_path_string);
     45 + let metadata_path = test_path.join("metadata.json");
     46 + let metadata = merge_with_common_metadata(&metadata_path, &common_metadata_path);
     47 + 
     48 + let gds = GDS::new(&metadata.to_string()).unwrap();
     49 + let schema = GDS::build_schema(&gds).unwrap();
     50 + 
     51 + let http_client = reqwest::Client::new();
     52 + let runtime = Runtime::new().unwrap();
     53 + 
     54 + let query = fs::read_to_string(request_path).unwrap();
     55 + let raw_request = RawRequest {
     56 + operation_name: None,
     57 + query,
     58 + variables: None,
     59 + };
     60 + 
     61 + let session = Identity::admin(Role::new("admin"))
     62 + .get_role_authorization(None)
     63 + .unwrap()
     64 + .build_session(HashMap::new());
     65 + 
     66 + let mut group = c.benchmark_group(benchmark_group);
     67 + 
     68 + // Parse request
     69 + group.bench_with_input(
     70 + BenchmarkId::new("bench", "Resolution of raw request"),
     71 + &(&runtime, &raw_request),
     72 + |b, (runtime, request)| {
     73 + b.to_async(*runtime).iter(|| async {
     74 + gql::parser::Parser::new(&request.query)
     75 + .parse_executable_document()
     76 + .unwrap()
     77 + })
     78 + },
     79 + );
     80 + 
     81 + let query = gql::parser::Parser::new(&raw_request.query)
     82 + .parse_executable_document()
     83 + .unwrap();
     84 + 
     85 + // Normalize request
     86 + let request = gql::http::Request {
     87 + operation_name: None,
     88 + query,
     89 + variables: HashMap::default(),
     90 + };
     91 + 
     92 + group.bench_with_input(
     93 + BenchmarkId::new("bench_execute", "Normalize request"),
     94 + &(&runtime, &schema, &request),
     95 + |b, (runtime, schema, request)| {
     96 + b.to_async(*runtime).iter(|| async {
     97 + gql::validation::normalize_request(&session.role, schema, request).unwrap();
     98 + })
     99 + },
     100 + );
     101 + 
     102 + let normalized_request =
     103 + gql::validation::normalize_request(&session.role, &schema, &request).unwrap();
     104 + 
     105 + // Generate IR
     106 + group.bench_with_input(
     107 + BenchmarkId::new("bench_execute", "Generate IR"),
     108 + &(&runtime, &schema),
     109 + |b, (runtime, schema)| {
     110 + b.to_async(*runtime)
     111 + .iter(|| async { generate_ir(schema, &session, &normalized_request).unwrap() })
     112 + },
     113 + );
     114 + 
     115 + let ir = generate_ir(&schema, &session, &normalized_request).unwrap();
     116 + 
     117 + // Generate Query Plan
     118 + group.bench_with_input(
     119 + BenchmarkId::new("bench_execute", "Generate Query Plan"),
     120 + &(&runtime),
     121 + |b, runtime| {
     122 + b.to_async(*runtime)
     123 + .iter(|| async { generate_query_plan(&ir).unwrap() })
     124 + },
     125 + );
     126 + 
     127 + // Execute Query plan
     128 + group.bench_with_input(
     129 + BenchmarkId::new("bench_execute", "Execute Query Plan"),
     130 + &(&runtime),
     131 + |b, runtime| {
     132 + b.to_async(*runtime).iter(|| async {
     133 + execute_query_plan(&http_client, generate_query_plan(&ir).unwrap()).await
     134 + })
     135 + },
     136 + );
     137 + 
     138 + // Total execution time from start to finish
     139 + group.bench_with_input(
     140 + BenchmarkId::new("bench_execute", "Total Execution time"),
     141 + &(&runtime, &schema, raw_request),
     142 + |b, (runtime, schema, request)| {
     143 + b.to_async(*runtime).iter(|| async {
     144 + execute_query_internal(&http_client, schema, &session, request.clone())
     145 + .await
     146 + .unwrap()
     147 + })
     148 + },
     149 + );
     150 + 
     151 + group.finish();
     152 +}
     153 + 
     154 +fn bench_execute_all(c: &mut Criterion) {
     155 + // Simple select
     156 + let test_path_string = "execute/models/select_one/simple_select";
     157 + let common_metadata_path_string = "execute/common_metadata/postgres_connector_schema.json";
     158 + bench_execute(
     159 + c,
     160 + test_path_string,
     161 + common_metadata_path_string,
     162 + "simple_select",
     163 + );
     164 + 
     165 + // Select Many
     166 + let test_path_string = "execute/models/select_many/simple_select";
     167 + let common_metadata_path_string = "execute/common_metadata/postgres_connector_schema.json";
     168 + bench_execute(
     169 + c,
     170 + test_path_string,
     171 + common_metadata_path_string,
     172 + "select_many",
     173 + );
     174 + 
     175 + // Select Many with where clause
     176 + let test_path_string = "execute/models/select_many/where/simple";
     177 + let common_metadata_path_string = "execute/common_metadata/postgres_connector_schema.json";
     178 + bench_execute(
     179 + c,
     180 + test_path_string,
     181 + common_metadata_path_string,
     182 + "select_many_where",
     183 + );
     184 + 
     185 + // Object Relationships
     186 + let test_path_string = "execute/relationships/object";
     187 + let common_metadata_path_string = "execute/common_metadata/postgres_connector_schema.json";
     188 + bench_execute(
     189 + c,
     190 + test_path_string,
     191 + common_metadata_path_string,
     192 + "object_relationship",
     193 + );
     194 + 
     195 + // Array Relationships
     196 + let test_path_string = "execute/relationships/array";
     197 + let common_metadata_path_string = "execute/common_metadata/postgres_connector_schema.json";
     198 + bench_execute(
     199 + c,
     200 + test_path_string,
     201 + common_metadata_path_string,
     202 + "array_relationship",
     203 + );
     204 + 
     205 + // Relay node field
     206 + let test_path_string = "execute/relay/relay";
     207 + let common_metadata_path_string = "execute/common_metadata/postgres_connector_schema.json";
     208 + bench_execute(
     209 + c,
     210 + test_path_string,
     211 + common_metadata_path_string,
     212 + "relay_node_field",
     213 + );
     214 +}
     215 + 
     216 +criterion_group!(benches, bench_execute_all);
     217 +criterion_main!(benches);
     218 + 
  • v3/engine/benches/generate_ir.rs
     1 +use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
     2 +use engine::schema::GDS;
     3 +use hasura_authn_core::Identity;
     4 +use lang_graphql::http::Request;
     5 +use lang_graphql::parser::Parser;
     6 +use lang_graphql::validation::normalize_request;
     7 +use open_dds::permissions::Role;
     8 +use std::collections::HashMap;
     9 +use std::fs;
     10 +use std::path::PathBuf;
     11 + 
     12 +use engine::execute;
     13 + 
     14 +pub fn bench_generate_ir(c: &mut Criterion) {
     15 + let test_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("tests");
     16 + let schema = fs::read_to_string(test_dir.join("schema.json")).unwrap();
     17 + 
     18 + let gds = GDS::new(&schema).unwrap();
     19 + let schema = GDS::build_schema(&gds).unwrap();
     20 + 
     21 + let mut group = c.benchmark_group("generate_ir");
     22 + 
     23 + let session = Identity::admin(Role::new("admin"))
     24 + .get_role_authorization(None)
     25 + .unwrap()
     26 + .build_session(HashMap::new());
     27 + 
     28 + for input_file in fs::read_dir(test_dir.join("generate_ir")).unwrap() {
     29 + let entry = input_file.unwrap();
     30 + let raw_request = {
     31 + let path = entry.path();
     32 + assert!(path.is_dir());
     33 + fs::read_to_string(path.join("request.gql")).unwrap()
     34 + };
     35 + let path = entry.path();
     36 + let test_name = path.file_name().unwrap().to_str().unwrap();
     37 + 
     38 + let query = Parser::new(&raw_request)
     39 + .parse_executable_document()
     40 + .unwrap();
     41 + 
     42 + let request = Request {
     43 + operation_name: None,
     44 + query,
     45 + variables: HashMap::new(),
     46 + };
     47 + 
     48 + let normalized_request = normalize_request(&session.role, &schema, &request).unwrap();
     49 + 
     50 + group.bench_with_input(
     51 + BenchmarkId::new("generate_ir", test_name),
     52 + &(&schema, &normalized_request),
     53 + |b, (schema, normalized_request)| {
     54 + b.iter(|| execute::generate_ir(schema, &session, normalized_request).unwrap())
     55 + },
     56 + );
     57 + }
     58 + group.finish();
     59 +}
     60 + 
     61 +criterion_group!(benches, bench_generate_ir);
     62 +criterion_main!(benches);
     63 + 
  • v3/engine/bin/engine/index.html
     1 +<!--
     2 + * Copyright (c) 2021 GraphQL Contributors
     3 + * All rights reserved.
     4 + *
     5 + * This source code is licensed under the license found in the
     6 + * LICENSE file in the root directory of this source tree.
     7 +-->
     8 +<!-- Copied from https://github.com/graphql/graphiql/blob/main/examples/graphiql-cdn/index.html -->
     9 +<!doctype html>
     10 +<html lang="en">
     11 + <head>
     12 + <title>GraphiQL</title>
     13 + <style>
     14 + body {
     15 + height: 100%;
     16 + margin: 0;
     17 + width: 100%;
     18 + overflow: hidden;
     19 + }
     20 + 
     21 + #graphiql {
     22 + height: 100vh;
     23 + }
     24 + </style>
     25 + <!--
     26 + This GraphiQL example depends on Promise and fetch, which are available in
     27 + modern browsers, but can be "polyfilled" for older browsers.
     28 + GraphiQL itself depends on React DOM.
     29 + If you do not want to rely on a CDN, you can host these files locally or
     30 + include them directly in your favored resource bundler.
     31 + -->
     32 + <script
     33 + crossorigin
     34 + src="https://unpkg.com/react@18/umd/react.development.js"
     35 + ></script>
     36 + <script
     37 + crossorigin
     38 + src="https://unpkg.com/react-dom@18/umd/react-dom.development.js"
     39 + ></script>
     40 + <!--
     41 + These two files can be found in the npm module, however you may wish to
     42 + copy them directly into your environment, or perhaps include them in your
     43 + favored resource bundler.
     44 + -->
     45 + <script
     46 + src="https://unpkg.com/graphiql/graphiql.min.js"
     47 + type="application/javascript"
     48 + ></script>
     49 + <link rel="stylesheet" href="https://unpkg.com/graphiql/graphiql.min.css" />
     50 + <!--
      51 + These are imports for the GraphiQL Explorer plugin.
     52 + -->
     53 + <script
     54 + src="https://unpkg.com/@graphiql/plugin-explorer/dist/index.umd.js"
     55 + crossorigin
     56 + ></script>
     57 + 
     58 + <link
     59 + rel="stylesheet"
     60 + href="https://unpkg.com/@graphiql/plugin-explorer/dist/style.css"
     61 + />
     62 + </head>
     63 + 
     64 + <body>
     65 + <div id="graphiql">Loading...</div>
     66 + <script>
     67 + const root = ReactDOM.createRoot(document.getElementById('graphiql'));
     68 + const fetcher = GraphiQL.createFetcher({
     69 + url: '/graphql',
     70 + // default header so that we don't have to add it manually in local dev
     71 + // headers: { 'x-hasura-role': 'admin' },
     72 + });
     73 + const explorerPlugin = GraphiQLPluginExplorer.explorerPlugin();
     74 + root.render(
     75 + React.createElement(GraphiQL, {
     76 + fetcher,
     77 + defaultEditorToolsVisibility: true,
     78 + plugins: [explorerPlugin],
     79 + }),
     80 + );
     81 + </script>
     82 + </body>
     83 +</html>
     84 + 
  • v3/engine/bin/engine/main.rs
     1 +use axum::{
     2 + body::HttpBody,
     3 + extract::State,
     4 + http::{HeaderMap, Request},
     5 + middleware::Next,
     6 + response::{Html, IntoResponse},
     7 + routing::{get, post},
     8 + Extension, Json, Router,
     9 +};
     10 +use clap::Parser;
     11 + 
     12 +use ::engine::authentication::{AuthConfig, AuthConfig::V1 as V1AuthConfig, AuthModeConfig};
     13 +use engine::schema::GDS;
     14 +use hasura_authn_core::Session;
     15 +use hasura_authn_jwt::auth as jwt_auth;
     16 +use hasura_authn_jwt::jwt;
     17 +use hasura_authn_webhook::webhook;
     18 +use lang_graphql as gql;
     19 +use std::path::PathBuf;
     20 +use std::{fmt::Display, sync::Arc};
     21 +use tower_http::trace::TraceLayer;
     22 +use tracing_util::{
     23 + add_event_on_active_span, set_status_on_current_span, ErrorVisibility, SpanVisibility,
     24 + TraceableError, TraceableHttpResponse,
     25 +};
     26 + 
     27 +#[derive(Parser)]
     28 +struct ServerOptions {
     29 + #[arg(long, value_name = "METADATA_FILE", env = "METADATA_PATH")]
     30 + metadata_path: PathBuf,
     31 + #[arg(long, value_name = "OTLP_ENDPOINT", env = "OTLP_ENDPOINT")]
     32 + otlp_endpoint: Option<String>,
     33 + #[arg(long, value_name = "AUTHN_CONFIG_FILE", env = "AUTHN_CONFIG_PATH")]
     34 + authn_config_path: PathBuf,
     35 + #[arg(long, value_name = "SERVER_PORT", env = "PORT")]
     36 + port: Option<i32>,
     37 +}
     38 + 
     39 +struct EngineState {
     40 + http_client: reqwest::Client,
     41 + schema: gql::schema::Schema<GDS>,
     42 + auth_config: AuthConfig,
     43 +}
     44 + 
     45 +#[tokio::main]
     46 +async fn main() {
     47 + let server = ServerOptions::parse();
     48 + 
     49 + let tracer = tracing_util::start_tracer(
     50 + server.otlp_endpoint.clone(),
     51 + "graphql-engine",
     52 + env!("CARGO_PKG_VERSION").to_string(),
     53 + )
     54 + .unwrap();
     55 + 
     56 + if let Err(e) = tracer
     57 + .in_span_async("app init", SpanVisibility::Internal, || {
     58 + Box::pin(start_engine(&server))
     59 + })
     60 + .await
     61 + {
     62 + println!("Error while starting up the engine: {e}");
     63 + }
     64 + 
     65 + tracing_util::shutdown_tracer();
     66 +}
     67 + 
     68 +#[derive(thiserror::Error, Debug)]
     69 +enum StartupError {
     70 + #[error("could not read the auth config - {0}")]
     71 + ReadAuth(String),
     72 + #[error("could not read the schema - {0}")]
     73 + ReadSchema(String),
     74 +}
     75 + 
     76 +impl TraceableError for StartupError {
     77 + fn visibility(&self) -> tracing_util::ErrorVisibility {
     78 + ErrorVisibility::User
     79 + }
     80 +}
     81 + 
     82 +async fn start_engine(server: &ServerOptions) -> Result<(), StartupError> {
     83 + let auth_config =
     84 + read_auth_config(&server.authn_config_path).map_err(StartupError::ReadAuth)?;
     85 + let schema = read_schema(&server.metadata_path).map_err(StartupError::ReadSchema)?;
     86 + let state = Arc::new(EngineState {
     87 + http_client: reqwest::Client::new(),
     88 + schema,
     89 + auth_config,
     90 + });
     91 + 
     92 + let graphql_route = Router::new()
     93 + .route("/graphql", post(handle_request))
     94 + .layer(axum::middleware::from_fn(
     95 + hasura_authn_core::resolve_session,
     96 + ))
     97 + .layer(axum::middleware::from_fn_with_state(
     98 + state.clone(),
     99 + authentication_middleware,
     100 + ))
     101 + .layer(axum::middleware::from_fn(
     102 + graphql_request_tracing_middleware,
     103 + ))
      104 + // *PLEASE DO NOT ADD ANY MIDDLEWARE BEFORE
      105 + // `graphql_request_tracing_middleware`*: axum runs later-added layers
      106 + // first, so this must remain the outermost `from_fn` layer here.
     107 + .layer(TraceLayer::new_for_http())
     108 + .with_state(state.clone());
     109 + 
     110 + let app = Router::new()
     111 + // serve graphiql at root
     112 + .route("/", get(graphiql))
     113 + .merge(graphql_route);
     114 + 
     115 + let addr = format!("0.0.0.0:{}", server.port.unwrap_or(3000));
     116 + 
     117 + let log = format!("starting server on {addr}");
     118 + println!("{log}");
     119 + add_event_on_active_span(log);
     120 + 
     121 + // run it with hyper on `addr`
     122 + axum::Server::bind(&addr.as_str().parse().unwrap())
     123 + .serve(app.into_make_service())
     124 + .await
     125 + .unwrap();
     126 + 
     127 + Ok(())
     128 +}
     129 + 
     130 +/// Middleware to start tracing of the `/graphql` request.
     131 +/// This middleware must be active for the entire duration
      132 +/// of the request, i.e. it must be both the entry point
      133 +/// and the exit point of the GraphQL request.
     134 +async fn graphql_request_tracing_middleware<B: Send>(
     135 + request: Request<B>,
     136 + next: Next<B>,
     137 +) -> axum::response::Result<axum::response::Response> {
     138 + let tracer = tracing_util::global_tracer();
     139 + let path = "/graphql";
     140 + 
     141 + Ok(tracer
     142 + .in_span_async(path, SpanVisibility::User, || {
     143 + Box::pin(async move {
     144 + let response = next.run(request).await;
     145 + TraceableHttpResponse::new(response, path)
     146 + })
     147 + })
     148 + .await
     149 + .response)
     150 +}
     151 + 
     152 +#[derive(Debug, thiserror::Error)]
     153 +enum AuthError {
     154 + #[error("JWT auth error: {0}")]
     155 + Jwt(#[from] jwt::Error),
     156 + #[error("Webhook auth error: {0}")]
     157 + Webhook(#[from] webhook::Error),
     158 +}
     159 + 
     160 +impl TraceableError for AuthError {
     161 + fn visibility(&self) -> tracing_util::ErrorVisibility {
     162 + match self {
     163 + AuthError::Jwt(e) => e.visibility(),
     164 + AuthError::Webhook(e) => e.visibility(),
     165 + }
     166 + }
     167 +}
     168 + 
     169 +impl IntoResponse for AuthError {
     170 + fn into_response(self) -> axum::response::Response {
     171 + match self {
     172 + AuthError::Jwt(e) => e.into_response(),
     173 + AuthError::Webhook(e) => e.into_response(),
     174 + }
     175 + }
     176 +}
     177 + 
     178 +/// This middleware authenticates the incoming GraphQL request according to the
     179 +/// authentication configuration present in the `auth_config` of `EngineState`. The
      180 +/// result of the authentication is a `hasura_authn_core::Identity`, which is then
     181 +/// made available to the GraphQL request handler.
     182 +async fn authentication_middleware<'a, B>(
     183 + State(engine_state): State<Arc<EngineState>>,
     184 + headers_map: HeaderMap,
     185 + mut request: Request<B>,
     186 + next: Next<B>,
     187 +) -> axum::response::Result<axum::response::Response>
     188 +where
     189 + B: HttpBody,
     190 + B::Error: Display,
     191 +{
     192 + let tracer = tracing_util::global_tracer();
     193 + 
     194 + let resolved_identity = tracer
     195 + .in_span_async(
     196 + "authentication_middleware",
     197 + SpanVisibility::Internal,
     198 + || {
     199 + Box::pin(async {
     200 + match &engine_state.auth_config {
     201 + V1AuthConfig(auth_config) => match &auth_config.mode {
     202 + AuthModeConfig::Webhook(webhook_config) => {
     203 + webhook::authenticate_request(
     204 + &engine_state.http_client,
     205 + webhook_config,
     206 + &headers_map,
     207 + auth_config.allow_role_emulation_by.clone(),
     208 + )
     209 + .await
     210 + .map_err(AuthError::from)
     211 + }
     212 + AuthModeConfig::Jwt(jwt_secret_config) => {
     213 + jwt_auth::authenticate_request(
     214 + &engine_state.http_client,
     215 + *jwt_secret_config.clone(),
     216 + auth_config.allow_role_emulation_by.clone(),
     217 + &headers_map,
     218 + )
     219 + .await
     220 + .map_err(AuthError::from)
     221 + }
     222 + },
     223 + }
     224 + })
     225 + },
     226 + )
     227 + .await?;
     228 + 
     229 + request.extensions_mut().insert(resolved_identity);
     230 + Ok(next.run(request).await)
     231 +}
     232 + 
     233 +async fn graphiql() -> Html<&'static str> {
     234 + Html(include_str!("index.html"))
     235 +}
     236 + 
     237 +async fn handle_request(
     238 + State(state): State<Arc<EngineState>>,
     239 + Extension(session): Extension<Session>,
     240 + Json(request): Json<gql::http::RawRequest>,
     241 +) -> gql::http::Response {
     242 + let tracer = tracing_util::global_tracer();
     243 + let response = tracer
     244 + .in_span_async("handle_request", SpanVisibility::User, || {
     245 + Box::pin(engine::execute::execute_query(
     246 + &state.http_client,
     247 + &state.schema,
     248 + &session,
     249 + request,
     250 + ))
     251 + })
     252 + .await;
     253 + 
      254 + // Mark the span as errored if the response contains an error.
      255 + // NOTE: Ideally, we would mark the root span as errored in `graphql_request_tracing_middleware`,
      256 + // the tracing middleware where the span is initialized. That is possible by completing the
      257 + // implementation of the `Traceable` trait for the `AxumResponse` struct, which just wraps
      258 + // `axum::response::Response`; there, the only way to detect an error is to inspect the status code.
      259 + // But the `/graphql` API sends all responses with `200 OK`, including errors, which leaves no way to deduce errors in the tracing middleware.
     260 + set_status_on_current_span(&response);
     261 + response.0
     262 +}
     263 + 
     264 +fn read_schema(metadata_path: &PathBuf) -> Result<gql::schema::Schema<GDS>, String> {
     265 + let metadata = std::fs::read_to_string(metadata_path).map_err(|e| e.to_string())?;
     266 + engine::build::build_schema(&metadata).map_err(|e| e.to_string())
     267 +}
     268 + 
     269 +fn read_auth_config(path: &PathBuf) -> Result<AuthConfig, String> {
     270 + let raw_auth_config = std::fs::read_to_string(path).map_err(|e| e.to_string())?;
     271 + serde_json::from_str(&raw_auth_config).map_err(|e| e.to_string())
     272 +}
     273 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/authentication.rs
     1 +use hasura_authn_core::Role;
     2 +use hasura_authn_jwt::jwt;
     3 +use hasura_authn_webhook::webhook;
     4 +use schemars::JsonSchema;
     5 +use serde::{Deserialize, Serialize};
     6 + 
     7 +#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, PartialEq)]
     8 +#[serde(rename_all = "camelCase")]
     9 +#[schemars(title = "AuthModeConfig")]
     10 +/// The configuration for the authentication mode to use - webhook or JWT.
     11 +pub enum AuthModeConfig {
     12 + Webhook(webhook::AuthHookConfig),
     13 + Jwt(Box<jwt::JWTConfig>),
     14 +}
     15 + 
     16 +#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, PartialEq)]
     17 +#[serde(tag = "version", content = "definition")]
     18 +#[serde(rename_all = "camelCase")]
     19 +#[serde(deny_unknown_fields)]
     20 +#[schemars(title = "AuthConfig")]
     21 +/// Definition of the authentication configuration used by the API server.
     22 +pub enum AuthConfig {
     23 + V1(AuthConfigV1),
     24 +}
     25 + 
     26 +#[derive(Serialize, Deserialize, Debug, Clone, JsonSchema, PartialEq)]
     27 +#[serde(rename_all = "camelCase")]
     28 +#[serde(deny_unknown_fields)]
     29 +#[schemars(title = "AuthConfigV1")]
     30 +#[schemars(example = "AuthConfigV1::example")]
     31 +/// Definition of the authentication configuration used by the API server.
     32 +pub struct AuthConfigV1 {
     33 + pub allow_role_emulation_by: Option<Role>,
     34 + pub mode: AuthModeConfig,
     35 +}
     36 + 
     37 +impl AuthConfigV1 {
     38 + fn example() -> Self {
     39 + serde_json::from_str(
     40 + r#"
     41 + {
     42 + "allowRoleEmulationBy": "admin",
     43 + "mode": {
     44 + "webhook": {
     45 + "url": "http://auth_hook:3050/validate-request",
     46 + "method": "Post"
     47 + }
     48 + }
     49 + }
     50 + "#,
     51 + )
     52 + .unwrap()
     53 + }
     54 +}
     55 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/build.rs
     1 +use crate::metadata;
     2 +use crate::schema;
     3 +use crate::schema::GDS;
     4 +use lang_graphql::schema as gql_schema;
     5 +use thiserror::Error;
     6 + 
     7 +#[derive(Error, Debug)]
     8 +pub enum BuildError {
     9 + #[error("invalid metadata: {0}")]
     10 + InvalidMetadata(#[from] metadata::resolved::error::Error),
     11 + #[error("unable to build schema: {0}")]
     12 + UnableToBuildSchema(#[from] schema::Error),
     13 + #[error("unable to encode schema: {0}")]
     14 + EncodingError(#[from] bincode::Error),
     15 +}
     16 + 
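       + /// Builds the GraphQL schema from raw Open DDS metadata: the metadata is
       + /// first resolved into a `GDS`, and the schema is then generated from it.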
     17 +pub fn build_schema(metadata: &str) -> Result<gql_schema::Schema<GDS>, BuildError> {
     18 + let gds = schema::GDS::new(metadata)?;
     19 + Ok(gds.build_schema()?)
     20 +}
     21 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/error.rs
     1 +use gql::{ast::common as ast, http::GraphQLError};
     2 +use lang_graphql as gql;
     3 +use open_dds::{
     4 + relationships::RelationshipName,
     5 + session_variables::SessionVariable,
     6 + types::{CustomTypeName, FieldName},
     7 +};
     8 +use reqwest::StatusCode;
     9 +use serde_json as json;
     10 +use thiserror::Error;
     11 +use tracing_util::{ErrorVisibility, TraceableError};
     12 +use transitive::Transitive;
     13 + 
     14 +use crate::metadata::resolved::subgraph::Qualified;
     15 + 
     16 +use super::types::Annotation;
     17 + 
     18 +#[derive(Error, Debug)]
     19 +pub enum InternalDeveloperError {
     20 + #[error("no source data connector specified for field {field_name} of type {type_name}")]
     21 + NoSourceDataConnector {
     22 + type_name: ast::TypeName,
     23 + field_name: ast::Name,
     24 + },
     25 + 
     26 + #[error("No argument source specified for argument {argument_name} of field {field_name}")]
     27 + NoArgumentSource {
     28 + field_name: ast::Name,
     29 + argument_name: ast::Name,
     30 + },
     31 + 
     32 + #[error("Required session variable not found in the request: {session_variable}")]
     33 + MissingSessionVariable { session_variable: SessionVariable },
     34 + 
     35 + #[error("Unable to typecast session variable. Expected: {expected:}, but found: {found:}")]
     36 + VariableTypeCast { expected: String, found: String },
     37 + 
     38 + #[error("Typecasting to array is not supported.")]
     39 + VariableArrayTypeCast,
     40 + 
     41 + #[error("Mapping for the Global ID typename {type_name:} not found")]
     42 + GlobalIDTypenameMappingNotFound { type_name: ast::TypeName },
     43 + 
     44 + #[error("Type mapping not found for the type name {type_name:} while executing the relationship {relationship_name:}")]
     45 + TypeMappingNotFoundForRelationship {
     46 + type_name: Qualified<CustomTypeName>,
     47 + relationship_name: RelationshipName,
     48 + },
     49 + 
     50 + #[error("Field mapping not found for the field {field_name:} of type {type_name:} while executing the relationship {relationship_name:}")]
     51 + FieldMappingNotFoundForRelationship {
     52 + type_name: Qualified<CustomTypeName>,
     53 + relationship_name: RelationshipName,
     54 + field_name: FieldName,
     55 + },
     56 + 
     57 + #[error("{}", render_ndc_error(.0))]
     58 + GDCClientError(ndc_client::apis::Error),
     59 + 
     60 + #[error("unexpected response from data connector: {summary}")]
     61 + BadGDCResponse { summary: String },
     62 +}
     63 + 
     64 +#[derive(Error, Debug)]
     65 +pub enum InternalEngineError {
     66 + #[error("introspection error: {0}")]
     67 + IntrospectionError(#[from] gql::introspection::Error),
     68 + 
     69 + #[error("serialization error: {0}")]
     70 + SerializationError(#[from] json::Error),
     71 + 
     72 + #[error("IR conversion error: {0}")]
     73 + IRConversionError(#[from] gql::normalized_ast::Error),
     74 + 
     75 + #[error("unexpected annotation: {annotation}")]
     76 + UnexpectedAnnotation { annotation: Annotation },
     77 + 
     78 + #[error("subscription shouldn't have been validated")]
     79 + SubscriptionsNotSupported,
     80 + 
     81 + #[error("Mapping for source column {source_column} already exists in the relationship {relationship_name}")]
     82 + MappingExistsInRelationship {
     83 + source_column: FieldName,
     84 + relationship_name: RelationshipName,
     85 + },
     86 + 
     87 + #[error("remote relationships should have been handled separately")]
     88 + RemoteRelationshipsAreNotSupported,
     89 + 
     90 + #[error("expected filter predicate but filter predicate namespaced annotation not found")]
     91 + FilterPermissionAnnotationNotFound,
     92 + 
     93 + #[error("internal error: {description}")]
     94 + InternalGeneric { description: String },
     95 +}
     96 + 
     97 +#[derive(Error, Debug, Transitive)]
     98 +#[transitive(from(json::Error, InternalEngineError))]
     99 +#[transitive(from(gql::normalized_ast::Error, InternalEngineError))]
     100 +#[transitive(from(gql::introspection::Error, InternalEngineError))]
     101 +pub enum InternalError {
     102 + #[error("{0}")]
     103 + Developer(#[from] InternalDeveloperError),
     104 + #[error("{0}")]
     105 + Engine(#[from] InternalEngineError),
     106 +}
     107 + 
     108 +impl InternalError {
     109 + fn get_details(&self) -> Option<serde_json::Value> {
     110 + match self {
     111 + Self::Developer(InternalDeveloperError::GDCClientError(
     112 + ndc_client::apis::Error::ConnectorError(ce),
     113 + )) => Some(ce.error_response.details.clone()),
     114 + _ => None,
     115 + }
     116 + }
     117 +}
     118 + 
     119 +#[derive(Error, Debug, Transitive)]
     120 +#[transitive(from(json::Error, InternalError))]
     121 +#[transitive(from(gql::normalized_ast::Error, InternalError))]
     122 +#[transitive(from(gql::introspection::Error, InternalError))]
     123 +#[transitive(from(InternalEngineError, InternalError))]
     124 +#[transitive(from(InternalDeveloperError, InternalError))]
     125 +pub enum Error {
     126 + #[error("parsing failed: {0}")]
     127 + ParseFailure(#[from] gql::ast::spanning::Positioned<gql::parser::Error>),
     128 + #[error("validation failed: {0}")]
     129 + ValidationFailed(#[from] gql::validation::Error),
     130 + 
     131 + #[error("The global ID {encoded_value:} couldn't be decoded due to {decoding_error:}")]
     132 + ErrorInDecodingGlobalId {
     133 + encoded_value: String,
     134 + decoding_error: String,
     135 + },
     136 + 
     137 + #[error("'{name:}' is not a valid GraphQL name.")]
     138 + TypeFieldInvalidGraphQlName { name: String },
     139 + 
     140 + #[error("ndc: {}", connector_error.error_response.message)]
     141 + NDCExpected {
     142 + connector_error: ndc_client::apis::ConnectorError,
     143 + },
     144 + #[error("{0}")]
     145 + InternalError(#[from] InternalError),
     146 +}
     147 + 
     148 +impl Error {
     149 + fn get_details(&self) -> Option<serde_json::Value> {
     150 + match self {
     151 + Error::InternalError(internal) => internal.get_details(),
     152 + Error::NDCExpected { connector_error } => {
     153 + Some(connector_error.error_response.details.clone())
     154 + }
     155 + _ => None,
     156 + }
     157 + }
     158 + 
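       + /// Converts this error into a GraphQL error for the HTTP response.
       + /// Internal errors are masked with a generic "internal error" message
       + /// and no extensions, so their details are not exposed to clients.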
     159 + pub fn to_graphql_error(self, path: Option<Vec<gql::http::PathSegment>>) -> GraphQLError {
     160 + let details = self.get_details();
     161 + match self {
     162 + Error::InternalError(_internal) => GraphQLError {
     163 + message: "internal error".into(),
     164 + path,
     165 + extensions: None, // Internal errors showing up in the API response is not desirable. Hence, extensions are masked for internal errors
     166 + },
     167 + e => GraphQLError {
     168 + message: e.to_string(),
     169 + path,
     170 + extensions: details.map(|details| gql::http::Extensions { details }),
     171 + },
     172 + }
     173 + }
     174 +}
     175 + 
      176 +// Convert NDC errors: connector errors with status 200, 403 or 409 are treated as expected (user-facing); everything else is internal.
     177 +impl From<ndc_client::apis::Error> for Error {
     178 + fn from(ndc_error: ndc_client::apis::Error) -> Error {
     179 + if let ndc_client::apis::Error::ConnectorError(err) = &ndc_error {
     180 + if matches!(
     181 + err.status,
     182 + StatusCode::OK | StatusCode::FORBIDDEN | StatusCode::CONFLICT
     183 + ) {
     184 + return Error::NDCExpected {
     185 + connector_error: err.clone(),
     186 + };
     187 + }
     188 + }
     189 + Error::InternalError(InternalError::Developer(
     190 + InternalDeveloperError::GDCClientError(ndc_error),
     191 + ))
     192 + }
     193 +}
     194 + 
     195 +fn render_ndc_error(error: &ndc_client::apis::Error) -> String {
     196 + match error {
     197 + ndc_client::apis::Error::Reqwest(err) => match err.status() {
     198 + Some(code) => format!("request to connector failed with status code {0}", code),
     199 + None => format!("request to connector failed: {}", err),
     200 + },
     201 + ndc_client::apis::Error::Serde(err) => {
     202 + format!("unable to decode JSON response from connector: {0}", err)
     203 + }
     204 + ndc_client::apis::Error::Io(_err) => "internal IO error".into(),
     205 + ndc_client::apis::Error::ConnectorError(err) => format!(
     206 + "connector returned status code {0} with message: {1}",
     207 + err.status, err.error_response.message,
     208 + ),
     209 + ndc_client::apis::Error::InvalidConnectorUrl(err) => {
     210 + format!("invalid connector url: {err}")
     211 + }
     212 + }
     213 +}
     214 + 
     215 +impl TraceableError for Error {
     216 + fn visibility(&self) -> ErrorVisibility {
     217 + match self {
     218 + Error::InternalError(internal) => match internal {
     219 + InternalError::Developer(_) => ErrorVisibility::User,
     220 + InternalError::Engine(_) => ErrorVisibility::Internal,
     221 + },
     222 + _ => ErrorVisibility::User,
     223 + }
     224 + }
     225 +}
     226 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/global_id.rs
     1 +use lang_graphql::ast::common::Alias;
     2 +use open_dds::types::FieldName;
     3 + 
     4 +const GLOBAL_ID_NDC_PREFIX: &str = "hasura_global_id_col";
     5 +pub const GLOBAL_ID_VERSION: u16 = 1;
     6 + 
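       + /// Formats the NDC column alias used to fetch a global ID field:
       + /// `hasura_global_id_col_<alias>_<field_name>`.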
     7 +pub fn global_id_col_format(alias: &Alias, field_name: &FieldName) -> String {
     8 + format!(
     9 + "{}_{}_{}",
      10 + GLOBAL_ID_NDC_PREFIX,
     11 + alias,
     12 + &field_name.0
     13 + )
     14 +}
     15 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/arguments.rs
     1 +use std::collections::{BTreeMap, HashMap};
     2 + 
     3 +use lang_graphql::ast::common::Name;
     4 +use lang_graphql::normalized_ast::{InputField, Value};
     5 +use open_dds::types::{CustomTypeName, InbuiltType};
     6 + 
     7 +use crate::execute::error;
     8 +use crate::metadata::resolved::subgraph::{
     9 + Qualified, QualifiedBaseType, QualifiedTypeName, QualifiedTypeReference,
     10 +};
     11 +use crate::metadata::resolved::types::TypeMapping;
     12 +use crate::schema::types::{Annotation, InputAnnotation, ModelInputAnnotation};
     13 +use crate::schema::GDS;
     14 + 
     15 +pub fn build_ndc_command_arguments(
     16 + command_field: &Name,
     17 + argument: &InputField<GDS>,
     18 + command_type_mappings: &BTreeMap<Qualified<CustomTypeName>, TypeMapping>,
     19 +) -> Result<HashMap<String, serde_json::Value>, error::Error> {
     20 + let mut ndc_arguments = HashMap::new();
     21 + 
     22 + match argument.info.generic {
     23 + Annotation::Input(InputAnnotation::CommandArgument {
     24 + argument_type,
     25 + ndc_func_proc_argument,
     26 + }) => {
     27 + let ndc_func_proc_argument = ndc_func_proc_argument.clone().ok_or_else(|| {
     28 + error::InternalDeveloperError::NoArgumentSource {
     29 + field_name: command_field.clone(),
     30 + argument_name: argument.name.clone(),
     31 + }
     32 + })?;
     33 + let mapped_argument_value = map_argument_value_to_ndc_type(
     34 + argument_type,
     35 + &argument.value,
     36 + command_type_mappings,
     37 + )?;
     38 + ndc_arguments.insert(ndc_func_proc_argument, mapped_argument_value);
     39 + Ok(())
     40 + }
     41 + annotation => Err(error::InternalEngineError::UnexpectedAnnotation {
     42 + annotation: annotation.clone(),
     43 + }),
     44 + }?;
     45 + Ok(ndc_arguments)
     46 +}
     47 + 
     48 +pub fn build_ndc_model_arguments<'a, TInputFieldIter: Iterator<Item = &'a InputField<'a, GDS>>>(
     49 + model_operation_field: &Name,
     50 + arguments: TInputFieldIter,
     51 + model_type_mappings: &BTreeMap<Qualified<CustomTypeName>, TypeMapping>,
     52 +) -> Result<BTreeMap<String, ndc_client::models::Argument>, error::Error> {
     53 + let mut ndc_arguments = BTreeMap::new();
     54 + for argument in arguments {
     55 + match argument.info.generic {
     56 + Annotation::Input(InputAnnotation::Model(ModelInputAnnotation::ModelArgument {
     57 + argument_type,
     58 + ndc_table_argument,
     59 + })) => {
     60 + let ndc_table_argument = ndc_table_argument.clone().ok_or_else(|| {
     61 + error::InternalDeveloperError::NoArgumentSource {
     62 + field_name: model_operation_field.clone(),
     63 + argument_name: argument.name.clone(),
     64 + }
     65 + })?;
     66 + let mapped_argument_value = map_argument_value_to_ndc_type(
     67 + argument_type,
     68 + &argument.value,
     69 + model_type_mappings,
     70 + )?;
     71 + ndc_arguments.insert(
     72 + ndc_table_argument,
     73 + ndc_client::models::Argument::Literal {
     74 + value: mapped_argument_value,
     75 + },
     76 + );
     77 + }
     78 + annotation => Err(error::InternalEngineError::UnexpectedAnnotation {
     79 + annotation: annotation.clone(),
     80 + })?,
     81 + }
     82 + }
     83 + Ok(ndc_arguments)
     84 +}
     85 + 
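       + /// Recursively maps a GraphQL input value to the JSON value expected by
       + /// the NDC: nulls and lists map directly, inbuilt scalars map to their
       + /// JSON counterparts, and custom object types have their fields renamed
       + /// via the configured type mappings (or pass through unchanged when no
       + /// mapping exists).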
     86 +fn map_argument_value_to_ndc_type(
     87 + value_type: &QualifiedTypeReference,
     88 + value: &Value<GDS>,
     89 + type_mappings: &BTreeMap<Qualified<CustomTypeName>, TypeMapping>,
     90 +) -> Result<serde_json::Value, error::Error> {
     91 + if value.is_null() {
     92 + return Ok(serde_json::Value::Null);
     93 + }
     94 + 
     95 + match &value_type.underlying_type {
     96 + QualifiedBaseType::List(element_type) => {
     97 + let mapped_elements = value
     98 + .as_list()?
     99 + .iter()
     100 + .map(|element_value| {
     101 + map_argument_value_to_ndc_type(element_type, element_value, type_mappings)
     102 + })
     103 + .collect::<Result<Vec<_>, _>>()?;
     104 + Ok(serde_json::Value::from(mapped_elements))
     105 + }
     106 + QualifiedBaseType::Named(QualifiedTypeName::Inbuilt(InbuiltType::String)) => {
     107 + Ok(serde_json::Value::from(value.as_string()?))
     108 + }
     109 + QualifiedBaseType::Named(QualifiedTypeName::Inbuilt(InbuiltType::Float)) => {
     110 + Ok(serde_json::Value::from(value.as_float()?))
     111 + }
     112 + QualifiedBaseType::Named(QualifiedTypeName::Inbuilt(InbuiltType::Int)) => {
     113 + Ok(serde_json::Value::from(value.as_int_i64()?))
     114 + }
     115 + QualifiedBaseType::Named(QualifiedTypeName::Inbuilt(InbuiltType::ID)) => {
     116 + Ok(serde_json::Value::from(value.as_id()?))
     117 + }
     118 + QualifiedBaseType::Named(QualifiedTypeName::Inbuilt(InbuiltType::Boolean)) => {
     119 + Ok(serde_json::Value::from(value.as_boolean()?))
     120 + }
     121 + QualifiedBaseType::Named(QualifiedTypeName::Custom(custom_type_name)) => {
     122 + match type_mappings.get(custom_type_name) {
     123 + // If the custom type is a scalar or object but opaque on the NDC side, there won't be a mapping,
      124 + // in which case we pass the value through as-is.
     125 + None => Ok(value.as_json()),
     126 + Some(TypeMapping::Object { field_mappings }) => {
     127 + let object_value = value.as_object()?;
     128 + let mapped_fields = object_value
     129 + .iter()
     130 + .map(|(_gql_field_name, field_value)| {
     131 + let (field_name, field_type) = match field_value.info.generic {
     132 + Annotation::Input(InputAnnotation::InputObjectField {
     133 + field_name,
     134 + field_type,
     135 + }) => Ok((field_name, field_type)),
     136 + annotation => Err(error::InternalEngineError::UnexpectedAnnotation {
     137 + annotation: annotation.clone(),
     138 + }),
     139 + }?;
     140 + 
     141 + let field_mapping = field_mappings.get(field_name).ok_or_else(|| {
     142 + error::InternalEngineError::InternalGeneric {
     143 + description: format!("unable to find mapping for field {field_name:}"),
     144 + }
     145 + })?;
     146 + 
     147 + let mapped_field_value = map_argument_value_to_ndc_type(
     148 + field_type,
     149 + &field_value.value,
     150 + type_mappings,
     151 + )?;
     152 + Ok((field_mapping.column.to_string(), mapped_field_value))
     153 + })
     154 + .collect::<Result<serde_json::Map<String, serde_json::Value>, error::Error>>()?;
     155 + 
     156 + Ok(serde_json::Value::Object(mapped_fields))
     157 + }
     158 + }
     159 + }
     160 + }
     161 +}
     162 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/commands.rs
     1 +//! IR and execution logic for commands
     2 +//!
      3 +//! A 'command' executes a function/procedure and returns the result of the execution.
     4 + 
     5 +use hasura_authn_core::SessionVariables;
     6 +use lang_graphql::ast::common as ast;
     7 +use lang_graphql::ast::common::TypeContainer;
     8 +use lang_graphql::ast::common::TypeName;
     9 +use lang_graphql::normalized_ast;
     10 +use ndc_client as gdc;
     11 +use open_dds::commands::{self, DataConnectorCommand};
     12 +use serde::Serialize;
     13 +use serde_json as json;
     14 +use std::collections::BTreeMap;
     15 + 
     16 +use super::arguments;
     17 +use super::selection_set;
     18 +use crate::execute::error;
     19 +use crate::execute::model_tracking::{count_command, UsagesCounts};
     20 +use crate::execute::remote_joins::types::{JoinLocations, MonotonicCounter, RemoteJoin};
     21 +use crate::metadata::resolved;
     22 +use crate::metadata::resolved::subgraph;
     23 +use crate::schema::GDS;
     24 + 
     25 +/// IR for the 'command' operations
     26 +#[derive(Serialize, Debug)]
     27 +pub struct CommandRepresentation<'n, 's> {
     28 + /// The name of the command
     29 + pub command_name: subgraph::Qualified<commands::CommandName>,
     30 + 
     31 + /// The name of the field as published in the schema
     32 + pub field_name: ast::Name,
     33 + 
      34 + /// The data connector backing this command.
     35 + pub data_connector: resolved::data_connector::DataConnector,
     36 + 
      37 + /// Source function/procedure in the data connector for this command
     38 + pub ndc_source: DataConnectorCommand,
     39 + 
      40 + /// Arguments for the NDC function/procedure
     41 + pub(crate) arguments: BTreeMap<String, json::Value>,
     42 + 
     43 + /// IR for the command result selection set
     44 + pub(crate) selection: selection_set::ResultSelectionSet<'s>,
     45 + 
      46 + /// The GraphQL base type for the command's output_type. Helps in deciding how
      47 + /// the response from the NDC should be processed.
     48 + pub type_container: &'n TypeContainer<TypeName>,
     49 + 
     50 + // All the models/commands used in the 'command' operation.
     51 + pub(crate) usage_counts: UsagesCounts,
     52 +}
     53 + 
     54 +/// Generates the IR for a 'command' operation
     55 +#[allow(irrefutable_let_patterns)]
     56 +pub(crate) fn command_generate_ir<'n, 's>(
     57 + command_name: &subgraph::Qualified<commands::CommandName>,
     58 + field: &'n normalized_ast::Field<'s, GDS>,
     59 + field_call: &'s normalized_ast::FieldCall<'s, GDS>,
     60 + underlying_object_typename: &Option<subgraph::Qualified<open_dds::types::CustomTypeName>>,
     61 + command_source: &'s resolved::command::CommandSource,
     62 + session_variables: &SessionVariables,
     63 +) -> Result<CommandRepresentation<'n, 's>, error::Error> {
     64 + let empty_field_mappings = BTreeMap::new();
      65 + // No field mappings should exist if the resolved output type of the
      66 + // command is not a custom object type
     67 + let field_mappings = match underlying_object_typename {
     68 + None => &empty_field_mappings,
     69 + Some(typename) => command_source
     70 + .type_mappings
     71 + .get(typename)
     72 + .and_then(|type_mapping| {
     73 + if let resolved::types::TypeMapping::Object { field_mappings } = type_mapping {
     74 + Some(field_mappings)
     75 + } else {
     76 + None
     77 + }
     78 + })
     79 + .ok_or_else(|| error::InternalEngineError::InternalGeneric {
     80 + description: format!(
     81 + "type '{}' not found in command source type_mappings",
     82 + typename
     83 + ),
     84 + })?,
     85 + };
     86 + 
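       + // Translate each GraphQL argument of the field into the corresponding
       + // NDC function/procedure argument.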
     87 + let mut command_arguments = BTreeMap::new();
     88 + for argument in field_call.arguments.values() {
     89 + command_arguments.extend(
     90 + arguments::build_ndc_command_arguments(
     91 + &field_call.name,
     92 + argument,
     93 + &command_source.type_mappings,
     94 + )?
     95 + .into_iter(),
     96 + );
     97 + }
     98 + 
     99 + // Add the name of the root command
     100 + let mut usage_counts = UsagesCounts::new();
     101 + count_command(command_name.clone(), &mut usage_counts);
     102 + 
     103 + let selection = selection_set::generate_selection_set_ir(
     104 + &field.selection_set,
     105 + &command_source.data_connector,
     106 + &command_source.type_mappings,
     107 + field_mappings,
     108 + session_variables,
     109 + &mut usage_counts,
     110 + )?;
     111 + 
     112 + Ok(CommandRepresentation {
     113 + command_name: command_name.clone(),
     114 + field_name: field_call.name.clone(),
     115 + data_connector: command_source.data_connector.clone(),
     116 + ndc_source: command_source.source.clone(),
     117 + arguments: command_arguments,
     118 + selection,
     119 + type_container: &field.type_container,
     120 + // selection_set: &field.selection_set,
     121 + usage_counts,
     122 + })
     123 +}
     124 + 
     125 +pub fn ir_to_ndc_query_ir<'s>(
     126 + function_name: &String,
     127 + ir: &CommandRepresentation<'_, 's>,
     128 + join_id_counter: &mut MonotonicCounter,
     129 +) -> Result<(gdc::models::QueryRequest, JoinLocations<RemoteJoin<'s>>), error::Error> {
     130 + let (ndc_fields, jl) = selection_set::process_selection_set_ir(&ir.selection, join_id_counter)?;
     131 + let query = gdc::models::Query {
     132 + aggregates: None,
     133 + fields: Some(ndc_fields),
     134 + limit: None,
     135 + offset: None,
     136 + order_by: None,
     137 + predicate: None,
     138 + };
     139 + let mut collection_relationships = BTreeMap::new();
     140 + selection_set::collect_relationships(&ir.selection, &mut collection_relationships)?;
     141 + let arguments: BTreeMap<String, gdc::models::Argument> = ir
     142 + .arguments
     143 + .iter()
     144 + .map(|(k, v)| {
     145 + (
     146 + k.clone(),
     147 + gdc::models::Argument::Literal { value: v.clone() },
     148 + )
     149 + })
     150 + .collect();
     151 + let query_request = gdc::models::QueryRequest {
     152 + query,
     153 + collection: function_name.to_string(),
     154 + arguments,
     155 + collection_relationships,
     156 + variables: None,
     157 + };
     158 + Ok((query_request, jl))
     159 +}
     160 + 
     161 +pub fn ir_to_ndc_mutation_ir<'s>(
     162 + procedure_name: &String,
     163 + ir: &CommandRepresentation<'_, 's>,
     164 + join_id_counter: &mut MonotonicCounter,
     165 +) -> Result<(gdc::models::MutationRequest, JoinLocations<RemoteJoin<'s>>), error::Error> {
     166 + let arguments = ir
     167 + .arguments
     168 + .iter()
     169 + .map(|(k, v)| (k.clone(), v.clone()))
     170 + .collect::<BTreeMap<String, serde_json::Value>>();
     171 + 
     172 + let (ndc_fields, jl) = selection_set::process_selection_set_ir(&ir.selection, join_id_counter)?;
     173 + let mutation_operation = gdc::models::MutationOperation::Procedure {
     174 + name: procedure_name.to_string(),
     175 + arguments,
     176 + fields: Some(ndc_fields),
     177 + };
     178 + let mut collection_relationships = BTreeMap::new();
     179 + selection_set::collect_relationships(&ir.selection, &mut collection_relationships)?;
     180 + let mutation_request = gdc::models::MutationRequest {
     181 + operations: vec![mutation_operation],
     182 + collection_relationships,
     183 + };
     184 + Ok((mutation_request, jl))
     185 +}
     186 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/filter.rs
     1 +use indexmap::IndexMap;
     2 +use lang_graphql::ast::common as ast;
     3 +use lang_graphql::normalized_ast;
     4 +use ndc_client as gdc;
     5 + 
     6 +use crate::execute::error;
     7 +use crate::schema::types;
     8 +use crate::schema::types::{InputAnnotation, ModelInputAnnotation};
     9 +use crate::schema::GDS;
     10 + 
      11 +/// Generates the IR for a GraphQL 'where' boolean expression.
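       + /// The returned expressions are combined with `And` by the caller; each
       + /// element of an `_and`/`_or` list is an object whose fields form a
       + /// conjunction, hence the nested `And` wrapping below.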
     12 +pub(crate) fn resolve_filter_expression(
     13 + fields: &IndexMap<ast::Name, normalized_ast::InputField<'_, GDS>>,
     14 +) -> Result<Vec<gdc::models::Expression>, error::Error> {
     15 + let mut expressions = Vec::new();
     16 + for (_field_name, field) in fields {
     17 + match field.info.generic {
     18 + // "_and"
     19 + types::Annotation::Input(InputAnnotation::Model(
     20 + ModelInputAnnotation::ModelFilterArgument {
     21 + field: types::ModelFilterArgument::AndOp,
     22 + },
     23 + )) => {
     24 + let values = field.value.as_list()?;
     25 + let expression = gdc::models::Expression::And {
     26 + expressions: values
     27 + .iter()
     28 + .map(|value| {
     29 + Ok(gdc::models::Expression::And {
     30 + expressions: resolve_filter_expression(value.as_object()?)?,
     31 + })
     32 + })
     33 + .collect::<Result<Vec<gdc::models::Expression>, error::Error>>()?,
     34 + };
     35 + expressions.push(expression);
     36 + }
     37 + // "_or"
     38 + types::Annotation::Input(InputAnnotation::Model(
     39 + ModelInputAnnotation::ModelFilterArgument {
     40 + field: types::ModelFilterArgument::OrOp,
     41 + },
     42 + )) => {
     43 + let values = field.value.as_list()?;
     44 + let expression = gdc::models::Expression::Or {
     45 + expressions: values
     46 + .iter()
     47 + .map(|value| {
     48 + Ok(gdc::models::Expression::And {
     49 + expressions: resolve_filter_expression(value.as_object()?)?,
     50 + })
     51 + })
     52 + .collect::<Result<Vec<gdc::models::Expression>, error::Error>>()?,
     53 + };
     54 + expressions.push(expression);
     55 + }
     56 + // "_not"
     57 + types::Annotation::Input(InputAnnotation::Model(
     58 + ModelInputAnnotation::ModelFilterArgument {
     59 + field: types::ModelFilterArgument::NotOp,
     60 + },
     61 + )) => {
     62 + let value = field.value.as_object()?;
     63 + expressions.push(gdc::models::Expression::Not {
     64 + expression: Box::new(gdc::models::Expression::And {
     65 + expressions: resolve_filter_expression(value)?,
     66 + }),
     67 + })
     68 + }
     69 + types::Annotation::Input(InputAnnotation::Model(
     70 + ModelInputAnnotation::ModelFilterArgument {
     71 + field: types::ModelFilterArgument::Field { ndc_column: column },
     72 + },
     73 + )) => {
     74 + for (op_name, op_value) in field.value.as_object()? {
     75 + let expression = match op_name.as_str() {
     76 + "_eq" => build_binary_comparison_expression(
     77 + gdc::models::BinaryComparisonOperator::Equal,
     78 + column.clone(),
     79 + &op_value.value,
     80 + ),
     81 + "_is_null" => build_is_null_expression(column.clone(), &op_value.value)?,
     82 + other => {
     83 + let operator = gdc::models::BinaryComparisonOperator::Other {
     84 + name: other.to_string(),
     85 + };
     86 + build_binary_comparison_expression(
     87 + operator,
     88 + column.clone(),
     89 + &op_value.value,
     90 + )
     91 + }
     92 + };
     93 + expressions.push(expression)
     94 + }
     95 + }
     96 + annotation => Err(error::InternalEngineError::UnexpectedAnnotation {
     97 + annotation: annotation.clone(),
     98 + })?,
     99 + }
     100 + }
     101 + Ok(expressions)
     102 +}
     103 + 
     104 +/// Generate a binary comparison operator
     105 +fn build_binary_comparison_expression(
     106 + operator: gdc::models::BinaryComparisonOperator,
     107 + column: String,
     108 + value: &normalized_ast::Value<'_, GDS>,
     109 +) -> gdc::models::Expression {
     110 + gdc::models::Expression::BinaryComparisonOperator {
     111 + column: gdc::models::ComparisonTarget::Column {
     112 + name: column,
     113 + path: vec![],
     114 + },
     115 + operator,
     116 + value: gdc::models::ComparisonValue::Scalar {
     117 + value: value.as_json(),
     118 + },
     119 + }
     120 +}
     121 + 
     122 +/// Resolve `_is_null` GraphQL boolean operator
     123 +fn build_is_null_expression(
     124 + column: String,
     125 + value: &normalized_ast::Value<'_, GDS>,
     126 +) -> Result<gdc::models::Expression, error::Error> {
     127 + // Build an 'IsNull' unary comparison expression
     128 + let unary_comparison_expression = gdc::models::Expression::UnaryComparisonOperator {
     129 + column: gdc::models::ComparisonTarget::Column {
     130 + name: column,
     131 + path: vec![],
     132 + },
     133 + operator: gdc::models::UnaryComparisonOperator::IsNull,
     134 + };
     135 + // Get `_is_null` input value as boolean
     136 + let is_null = value.as_boolean()?;
     137 + if is_null {
     138 + // When _is_null: true. Just return 'IsNull' unary comparison expression.
     139 + Ok(unary_comparison_expression)
     140 + } else {
     141 + // When _is_null: false. Return negated 'IsNull' unary comparison expression by wrapping it in 'Not'.
     142 + Ok(gdc::models::Expression::Not {
     143 + expression: Box::new(unary_comparison_expression),
     144 + })
     145 + }
     146 +}
     147 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/model_selection.rs
     1 +//! IR for the 'model_selection' type - selecting fields from a model
     2 + 
     3 +use hasura_authn_core::SessionVariables;
     4 +use lang_graphql::normalized_ast;
     5 +use ndc_client as ndc;
     6 +use open_dds::types::CustomTypeName;
     7 +use serde::Serialize;
     8 +use std::collections::BTreeMap;
     9 + 
     10 +use super::permissions;
     11 +use super::selection_set;
     12 +use crate::execute::error;
     13 +use crate::execute::model_tracking::UsagesCounts;
     14 +use crate::execute::remote_joins::types::{JoinLocations, MonotonicCounter, RemoteJoin};
     15 +use crate::metadata::resolved;
     16 +use crate::metadata::resolved::subgraph::Qualified;
     17 +use crate::schema::GDS;
     18 + 
     19 +/// IR fragment for any 'select' operation on a model
     20 +#[derive(Debug, Serialize)]
     21 +pub struct ModelSelection<'s> {
     22 + // The data connector backing this model.
     23 + pub data_connector: &'s resolved::data_connector::DataConnector,
     24 + 
     25 + // Source collection in the data connector for this model
     26 + pub(crate) collection: &'s String,
     27 + 
     28 + // Arguments for the NDC collection
     29 + pub(crate) arguments: BTreeMap<String, ndc::models::Argument>,
     30 + 
      31 + // The boolean expressions used to filter rows from this model
     32 + pub(crate) filter_clause: Vec<ndc::models::Expression>,
     33 + 
     34 + // Limit
     35 + pub(crate) limit: Option<u32>,
     36 + 
     37 + // Offset
     38 + pub(crate) offset: Option<u32>,
     39 + 
     40 + // Order by
     41 + pub(crate) order_by: Option<ndc::models::OrderBy>,
     42 + 
     43 + // Fields requested from the model
     44 + pub(crate) selection: selection_set::ResultSelectionSet<'s>,
     45 +}
     46 + 
     47 +/// Generates the IR fragment for selecting from a model.
     48 +#[allow(clippy::too_many_arguments)]
     49 +pub(crate) fn model_selection_ir<'s>(
     50 + selection_set: &normalized_ast::SelectionSet<'s, GDS>,
     51 + data_type: &Qualified<CustomTypeName>,
     52 + model_source: &'s resolved::model::ModelSource,
     53 + arguments: BTreeMap<String, ndc::models::Argument>,
     54 + mut filter_clauses: Vec<ndc::models::Expression>,
     55 + permissions_predicate: &resolved::model::FilterPermission,
     56 + limit: Option<u32>,
     57 + offset: Option<u32>,
     58 + order_by: Option<ndc::models::OrderBy>,
     59 + session_variables: &SessionVariables,
     60 + usage_counts: &mut UsagesCounts,
     61 +) -> Result<ModelSelection<'s>, error::Error> {
     62 + match permissions_predicate {
     63 + resolved::model::FilterPermission::AllowAll => {}
     64 + resolved::model::FilterPermission::Filter(predicate) => {
     65 + filter_clauses.push(permissions::process_model_predicate(
     66 + predicate,
     67 + session_variables,
     68 + )?);
     69 + }
     70 + };
     71 + let field_mappings = model_source
     72 + .type_mappings
     73 + .get(data_type)
     74 + .map(|type_mapping| {
     75 + let resolved::types::TypeMapping::Object { field_mappings } = type_mapping;
     76 + field_mappings
     77 + })
     78 + .ok_or_else(|| error::InternalEngineError::InternalGeneric {
     79 + description: format!("type '{data_type}' not found in model source type_mappings"),
     80 + })?;
     81 + let selection = selection_set::generate_selection_set_ir(
     82 + selection_set,
     83 + &model_source.data_connector,
     84 + &model_source.type_mappings,
     85 + field_mappings,
     86 + session_variables,
     87 + usage_counts,
     88 + )?;
     89 + 
     90 + Ok(ModelSelection {
     91 + data_connector: &model_source.data_connector,
     92 + collection: &model_source.collection,
     93 + arguments,
     94 + filter_clause: filter_clauses,
     95 + limit,
     96 + offset,
     97 + order_by,
     98 + selection,
     99 + })
     100 +}
     101 + 
     102 +pub(crate) fn ir_to_ndc_query<'s>(
     103 + ir: &ModelSelection<'s>,
     104 + join_id_counter: &mut MonotonicCounter,
     105 +) -> Result<(ndc::models::Query, JoinLocations<RemoteJoin<'s>>), error::Error> {
     106 + let (ndc_fields, join_locations) =
     107 + selection_set::process_selection_set_ir(&ir.selection, join_id_counter)?;
     108 + let ndc_query = ndc::models::Query {
     109 + aggregates: None,
     110 + fields: Some(ndc_fields),
     111 + limit: ir.limit,
     112 + offset: ir.offset,
     113 + order_by: ir.order_by.clone(),
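       + // Collapse the accumulated filter clauses into a single NDC predicate,
       + // avoiding a redundant `And` wrapper when there are zero or one clauses.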
     114 + predicate: match ir.filter_clause.as_slice() {
     115 + [] => None,
     116 + [expression] => Some(expression.clone()),
     117 + expressions => Some(ndc::models::Expression::And {
     118 + expressions: expressions.to_vec(),
     119 + }),
     120 + },
     121 + };
     122 + Ok((ndc_query, join_locations))
     123 +}
     124 + 
     125 +/// Convert the internal IR (`ModelSelection`) into NDC IR (`ndc::models::QueryRequest`)
     126 +pub fn ir_to_ndc_ir<'s>(
     127 + ir: &ModelSelection<'s>,
     128 + join_id_counter: &mut MonotonicCounter,
     129 +) -> Result<(ndc::models::QueryRequest, JoinLocations<RemoteJoin<'s>>), error::Error> {
     130 + let mut collection_relationships = BTreeMap::new();
     131 + selection_set::collect_relationships(&ir.selection, &mut collection_relationships)?;
     132 + let (query, join_locations) = ir_to_ndc_query(ir, join_id_counter)?;
     133 + let query_request = ndc::models::QueryRequest {
     134 + query,
     135 + collection: ir.collection.clone(),
     136 + arguments: ir.arguments.clone(),
     137 + collection_relationships,
     138 + variables: None,
     139 + };
     140 + Ok((query_request, join_locations))
     141 +}
     142 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/mutation_root.rs
     1 +//! IR of the mutation root type
     2 + 
     3 +use hasura_authn_core::SessionVariables;
     4 +use indexmap::IndexMap;
     5 +use lang_graphql as gql;
     6 +use lang_graphql::ast::common as ast;
     7 +use tracing_util::SpanVisibility;
     8 + 
     9 +use super::{commands, root_field};
     10 +use crate::execute::error;
     11 +use crate::schema::types::{Annotation, OutputAnnotation, RootFieldAnnotation};
     12 +use crate::schema::GDS;
     13 + 
     14 +/// Generates IR for the selection set of type 'mutation root'
     15 +pub fn generate_ir<'n, 's>(
     16 + selection_set: &'s gql::normalized_ast::SelectionSet<'s, GDS>,
     17 + session_variables: &SessionVariables,
     18 +) -> Result<IndexMap<ast::Alias, root_field::RootField<'n, 's>>, error::Error> {
     19 + let tracer = tracing_util::global_tracer();
     20 + tracer.in_span("generate_ir", SpanVisibility::Internal, || {
     21 + let type_name = selection_set
     22 + .type_name
     23 + .clone()
     24 + .ok_or_else(|| gql::normalized_ast::Error::NoTypenameFound)?;
     25 + let mut root_fields = IndexMap::new();
     26 + for (alias, field) in &selection_set.fields {
     27 + let field_call = field.field_call()?;
     28 + let field_response = match field_call.name.as_str() {
     29 + "__typename" => Ok(root_field::MutationRootField::TypeName {
     30 + type_name: type_name.clone(),
     31 + }),
     32 + _ => match field_call.info.generic {
     33 + Annotation::Output(OutputAnnotation::RootField(
     34 + RootFieldAnnotation::Command {
     35 + name,
     36 + underlying_object_typename,
     37 + source,
     38 + },
     39 + )) => {
     40 + let source = source.as_ref().ok_or_else(|| {
     41 + error::InternalDeveloperError::NoSourceDataConnector {
     42 + type_name: type_name.clone(),
     43 + field_name: field_call.name.clone(),
     44 + }
     45 + })?;
     46 + Ok(root_field::MutationRootField::CommandRepresentation {
     47 + selection_set: &field.selection_set,
     48 + ir: commands::command_generate_ir(
     49 + name,
     50 + field,
     51 + field_call,
     52 + underlying_object_typename,
     53 + source,
     54 + session_variables,
     55 + )?,
     56 + })
     57 + }
     58 + annotation => Err(error::InternalEngineError::UnexpectedAnnotation {
     59 + annotation: annotation.clone(),
     60 + }),
     61 + },
     62 + }?;
     63 + root_fields.insert(
     64 + alias.clone(),
     65 + root_field::RootField::MutationRootField(field_response),
     66 + );
     67 + }
     68 + Ok(root_fields)
     69 + })
     70 +}
     71 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/order_by.rs
     1 +use lang_graphql::normalized_ast::{self as normalized_ast, InputField};
     2 +use ndc_client as gdc;
     3 + 
     4 +use crate::schema::types::{Annotation, InputAnnotation, ModelInputAnnotation};
     5 + 
     6 +use crate::execute::error;
     7 +use crate::schema::types;
     8 +use crate::schema::GDS;
     9 + 
     10 +pub fn build_ndc_order_by(
     11 + args_field: &InputField<GDS>,
     12 +) -> Result<gdc::models::OrderBy, error::Error> {
     13 + match &args_field.value {
     14 + normalized_ast::Value::Object(arguments) => {
     15 + let mut ndc_order_elements = Vec::new();
     16 + for (_name, argument) in arguments {
     17 + match argument.info.generic {
     18 + Annotation::Input(InputAnnotation::Model(
     19 + types::ModelInputAnnotation::ModelOrderByArgument { ndc_column },
     20 + )) => {
     21 + let order_by_value = argument.value.as_enum()?;
     22 + let order_direction = match &order_by_value.info.generic {
     23 + Annotation::Input(InputAnnotation::Model(
     24 + ModelInputAnnotation::ModelOrderByDirection { direction },
     25 + )) => match &direction {
     26 + types::ModelOrderByDirection::Asc => {
     27 + gdc::models::OrderDirection::Asc
     28 + }
     29 + types::ModelOrderByDirection::Desc => {
     30 + gdc::models::OrderDirection::Desc
     31 + }
     32 + },
     33 + &annotation => {
     34 + return Err(error::InternalEngineError::UnexpectedAnnotation {
     35 + annotation: annotation.clone(),
     36 + })?
     37 + }
     38 + };
     39 + 
     40 + let order_element = gdc::models::OrderByElement {
     41 + order_direction,
     42 + target: gdc::models::OrderByTarget::Column {
     43 + name: ndc_column.clone(),
     44 + path: Vec::new(),
     45 + },
     46 + };
     47 + 
     48 + ndc_order_elements.push(order_element);
     49 + }
     50 + annotation => {
     51 + return Err(error::InternalEngineError::UnexpectedAnnotation {
     52 + annotation: annotation.clone(),
     53 + })?;
     54 + }
     55 + }
     56 + }
     57 + Ok(gdc::models::OrderBy {
     58 + elements: ndc_order_elements,
     59 + })
     60 + }
     61 + _ => Err(error::InternalEngineError::InternalGeneric {
     62 + description: "Expected object value for model arguments".into(),
     63 + })?,
     64 + }
     65 +}
     66 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/permissions.rs
     1 +use hasura_authn_core::{SessionVariableValue, SessionVariables};
     2 +use lang_graphql::normalized_ast;
     3 +use ndc_client as gdc;
     4 +use open_dds::{permissions::ValueExpression, types::InbuiltType};
     5 + 
     6 +use crate::execute::error::{Error, InternalDeveloperError, InternalEngineError, InternalError};
     7 +use crate::metadata::resolved;
     8 +use crate::metadata::resolved::subgraph::{
     9 + QualifiedBaseType, QualifiedTypeName, QualifiedTypeReference,
     10 +};
     11 +use crate::schema::types;
     12 +use crate::schema::GDS;
     13 + 
     14 +/// Fetch filter expression from the namespace annotation
     15 +/// of the field call. If the filter predicate namespace annotation
      16 +/// is not found, an error is returned.
     17 +pub(crate) fn get_select_filter_predicate<'s>(
     18 + field_call: &normalized_ast::FieldCall<'s, GDS>,
     19 +) -> Result<&'s resolved::model::FilterPermission, Error> {
     20 + field_call
     21 + .info
     22 + .namespaced
     23 + .as_ref()
     24 + .map(|annotation| match annotation {
     25 + types::NamespaceAnnotation::Filter(predicate) => predicate,
     26 + })
     27 + // If we're hitting this case, it means that the caller of this
     28 + // function expects a filter predicate, but it was not annotated
     29 + // when the V3 engine metadata was built
     30 + .ok_or(Error::InternalError(InternalError::Engine(
     31 + InternalEngineError::FilterPermissionAnnotationNotFound,
     32 + )))
     33 +}
     34 + 
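       + /// Translates a resolved model permission predicate into an NDC boolean
       + /// expression, resolving any referenced session variables along the way.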
     35 +pub(crate) fn process_model_predicate(
     36 + model_predicate: &resolved::model::ModelPredicate,
     37 + session_variables: &SessionVariables,
     38 +) -> Result<gdc::models::Expression, Error> {
     39 + match model_predicate {
     40 + resolved::model::ModelPredicate::UnaryFieldComparison {
     41 + field: _,
     42 + ndc_column,
     43 + operator,
     44 + } => Ok(make_permission_unary_boolean_expression(
     45 + ndc_column.clone(),
     46 + operator,
     47 + )?),
     48 + resolved::model::ModelPredicate::BinaryFieldComparison {
     49 + field: _,
     50 + ndc_column,
     51 + argument_type,
     52 + operator,
     53 + value,
     54 + } => Ok(make_permission_binary_boolean_expression(
     55 + ndc_column.clone(),
     56 + argument_type,
     57 + operator,
     58 + value,
     59 + session_variables,
     60 + )?),
     61 + resolved::model::ModelPredicate::Not(predicate) => {
     62 + let expr = process_model_predicate(predicate, session_variables)?;
     63 + Ok(gdc::models::Expression::Not {
     64 + expression: Box::new(expr),
     65 + })
     66 + }
     67 + resolved::model::ModelPredicate::And(predicates) => {
     68 + let exprs = predicates
     69 + .iter()
     70 + .map(|p| process_model_predicate(p, session_variables))
     71 + .collect::<Result<Vec<_>, Error>>()?;
     72 + Ok(gdc::models::Expression::And { expressions: exprs })
     73 + }
     74 + resolved::model::ModelPredicate::Or(predicates) => {
     75 + let exprs = predicates
     76 + .iter()
     77 + .map(|p| process_model_predicate(p, session_variables))
     78 + .collect::<Result<Vec<_>, Error>>()?;
     79 + Ok(gdc::models::Expression::Or { expressions: exprs })
     80 + }
     81 + // TODO: implement this
     82 + // TODO: naveen: When we can use models in predicates, make sure to
      83 + // include those models in the 'models_used' field of the IR. This is
      84 + // for tracking the models used in the query.
     85 + resolved::model::ModelPredicate::Relationship {
     86 + name: _,
     87 + predicate: _,
     88 + } => Err(InternalEngineError::InternalGeneric {
     89 + description: "'relationship' model predicate is not supported yet.".to_string(),
     90 + })?,
     91 + }
     92 +}
     93 + 
     94 +fn make_permission_binary_boolean_expression(
     95 + ndc_column: String,
     96 + argument_type: &QualifiedTypeReference,
     97 + operator: &ndc_client::models::BinaryComparisonOperator,
     98 + value_expression: &ValueExpression,
     99 + session_variables: &SessionVariables,
     100 +) -> Result<gdc::models::Expression, Error> {
     101 + let ndc_expression_value =
     102 + make_value_from_value_expression(value_expression, argument_type, session_variables)?;
     103 + Ok(gdc::models::Expression::BinaryComparisonOperator {
     104 + column: gdc::models::ComparisonTarget::Column {
     105 + name: ndc_column,
     106 + path: vec![],
     107 + },
     108 + operator: operator.clone(),
     109 + value: ndc_expression_value,
     110 + })
     111 +}
     112 + 
     113 +fn make_permission_unary_boolean_expression(
     114 + ndc_column: String,
     115 + operator: &ndc_client::models::UnaryComparisonOperator,
     116 +) -> Result<gdc::models::Expression, Error> {
     117 + Ok(gdc::models::Expression::UnaryComparisonOperator {
     118 + column: gdc::models::ComparisonTarget::Column {
     119 + name: ndc_column,
     120 + path: vec![],
     121 + },
     122 + operator: *operator,
     123 + })
     124 +}
     125 + 
     126 +fn make_value_from_value_expression(
     127 + val_expr: &ValueExpression,
     128 + field_type: &QualifiedTypeReference,
     129 + session_variables: &SessionVariables,
     130 +) -> Result<gdc::models::ComparisonValue, Error> {
     131 + match val_expr {
     132 + ValueExpression::Literal(val) => {
     133 + Ok(gdc::models::ComparisonValue::Scalar { value: val.clone() })
     134 + }
     135 + ValueExpression::SessionVariable(session_var) => {
     136 + let value = session_variables.get(session_var).ok_or_else(|| {
     137 + InternalDeveloperError::MissingSessionVariable {
     138 + session_variable: session_var.clone(),
     139 + }
     140 + })?;
     141 + 
     142 + Ok(gdc::models::ComparisonValue::Scalar {
     143 + value: typecast_session_variable(value, field_type)?,
     144 + })
     145 + }
     146 + }
     147 +}
     148 + 
     149 +/// Typecast a stringified session variable into a given type, but as a serde_json::Value
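       + /// Session variable values are transported as strings, so e.g. an Int-typed
       + /// variable arrives as "42" and must be parsed into a JSON number here.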
     150 +fn typecast_session_variable(
     151 + session_var_value_wrapped: &SessionVariableValue,
     152 + to_type: &QualifiedTypeReference,
     153 +) -> Result<serde_json::Value, Error> {
     154 + let session_var_value = &session_var_value_wrapped.0;
     155 + match &to_type.underlying_type {
     156 + QualifiedBaseType::Named(type_name) => {
     157 + match type_name {
     158 + QualifiedTypeName::Inbuilt(primitive) => match primitive {
     159 + InbuiltType::Int => {
     160 + let value: i32 = session_var_value.parse().map_err(|_| {
     161 + InternalDeveloperError::VariableTypeCast {
     162 + expected: "int".into(),
     163 + found: session_var_value.clone(),
     164 + }
     165 + })?;
     166 + Ok(serde_json::Value::Number(value.into()))
     167 + }
     168 + InbuiltType::Float => {
     169 + let value: f32 = session_var_value.parse().map_err(|_| {
     170 + InternalDeveloperError::VariableTypeCast {
     171 + expected: "float".into(),
     172 + found: session_var_value.clone(),
     173 + }
     174 + })?;
     175 + Ok(serde_json::to_value(value)?)
     176 + }
     177 + InbuiltType::Boolean => match session_var_value.as_str() {
     178 + "true" => Ok(serde_json::Value::Bool(true)),
     179 + "false" => Ok(serde_json::Value::Bool(false)),
     180 + _ => Err(InternalDeveloperError::VariableTypeCast {
     181 + expected: "true or false".into(),
     182 + found: session_var_value.clone(),
     183 + })?,
     184 + },
     185 + InbuiltType::String => Ok(serde_json::to_value(session_var_value)?),
     186 + InbuiltType::ID => Ok(serde_json::to_value(session_var_value)?),
     187 + },
     188 + // TODO: Currently, we don't support `representation` for `NewTypes`, so custom_type is a black box with only
     189 + // the name present. We need to add support for `representation` so that we can use it to typecast custom new_types
     190 + // based on the definition in their representation.
     191 + QualifiedTypeName::Custom(_type_name) => {
     192 + let value_result = serde_json::from_str(session_var_value);
     193 + match value_result {
     194 + Ok(value) => Ok(value),
     195 + // If the session variable value doesn't parse as JSON, we treat the variable as a string (e.g.
     196 + // session variables of string type aren't typically quoted)
     197 + Err(_) => Ok(serde_json::to_value(session_var_value)?),
     198 + }
     199 + }
     200 + }
     201 + }
     202 + QualifiedBaseType::List(_) => Err(InternalDeveloperError::VariableArrayTypeCast)?,
     203 + }
     204 +}
     205 + 
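To illustrate the casting rules above: all session variables arrive stringified, and `typecast_session_variable` re-types them according to the field's scalar type. Below is a minimal, self-contained sketch of the same logic, with the engine's type machinery replaced by a plain illustrative enum (`Inbuilt` here is a stand-in for illustration, not the real `InbuiltType`):

```
use serde_json::Value;

// Illustrative stand-in for the engine's inbuilt scalar kinds.
enum Inbuilt {
    Int,
    Float,
    Boolean,
    String,
}

// Mirror of the typecasting logic: parse the stringified session
// variable into a JSON value of the requested scalar type.
fn typecast(raw: &str, ty: &Inbuilt) -> Result<Value, String> {
    match ty {
        Inbuilt::Int => raw
            .parse::<i32>()
            .map(|n| Value::Number(n.into()))
            .map_err(|_| format!("expected int, found {raw}")),
        Inbuilt::Float => raw
            .parse::<f32>()
            .map_err(|_| format!("expected float, found {raw}"))
            .and_then(|f| serde_json::to_value(f).map_err(|e| e.to_string())),
        // Only the exact strings "true"/"false" are accepted, as above.
        Inbuilt::Boolean => match raw {
            "true" => Ok(Value::Bool(true)),
            "false" => Ok(Value::Bool(false)),
            _ => Err(format!("expected true or false, found {raw}")),
        },
        Inbuilt::String => Ok(Value::String(raw.to_string())),
    }
}

fn main() {
    // Session variables always arrive as strings; "42" becomes a JSON
    // number only when the target field type asks for Int.
    assert_eq!(typecast("42", &Inbuilt::Int).unwrap(), Value::Number(42.into()));
    assert_eq!(typecast("true", &Inbuilt::Boolean).unwrap(), Value::Bool(true));
    assert!(typecast("forty-two", &Inbuilt::Int).is_err());
    assert!(typecast("1.5", &Inbuilt::Float).is_ok());
    assert_eq!(typecast("alice", &Inbuilt::String).unwrap(), Value::String("alice".into()));
}
```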
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/query_root/node_field.rs
      1 +//! IR of the Relay 'node' field, according to https://relay.dev/graphql/objectidentification.htm
     2 + 
     3 +use base64::{engine::general_purpose, Engine};
     4 +use hasura_authn_core::SessionVariables;
     5 +use lang_graphql::{ast::common as ast, normalized_ast};
     6 +use ndc_client as ndc;
     7 +use open_dds::permissions::Role;
     8 +use serde::Serialize;
     9 +use std::collections::{BTreeMap, HashMap};
     10 + 
     11 +use crate::execute::error;
     12 +use crate::execute::ir::model_selection;
     13 +use crate::execute::model_tracking::UsagesCounts;
     14 +use crate::metadata::resolved;
     15 +use crate::schema::types::{GlobalID, NodeFieldTypeNameMapping};
     16 +use crate::schema::GDS;
     17 + 
     18 +/// IR for the 'select_one' operation on a model
     19 +#[derive(Serialize, Debug)]
     20 +pub struct NodeSelect<'n, 's> {
     21 + // The name of the field as published in the schema
     22 + pub field_name: &'n ast::Name,
     23 + 
     24 + /// Model Selection IR fragment
     25 + pub model_selection: model_selection::ModelSelection<'s>,
     26 + 
      27 + // We need this to post-process the response for `__typename` fields and to
      28 + // validate the response from the data connector. This is not a reference,
      29 + // as it is constructed from the original selection set by keeping only the
      30 + // relevant fields.
     31 + pub selection_set: normalized_ast::SelectionSet<'s, GDS>,
     32 + 
      33 + // All the models/commands used in this operation. This includes the models/commands
      34 + // used via relationships. In the future, it will also include the models/commands used in the filter clause.
     35 + pub(crate) usage_counts: UsagesCounts,
     36 +}
     37 + 
      38 +/// Generate the NDC IR for the node root field.
      39 +///
      40 +/// This function decodes the value of the `id`
      41 +/// argument and then looks the `typename` up in the
      42 +/// `typename_mappings`. A successful lookup yields the
      43 +/// `data_specification::TypeName` and the `ModelSource`
      44 +/// associated with the typename, along with the roles that
      45 +/// can access the object corresponding to the type name.
      46 +/// If the role doesn't have model select permissions
      47 +/// on the model that is the global ID source for the
      48 +/// object type that was decoded, then this function
      49 +/// returns `None`.
     50 +pub(crate) fn relay_node_ir<'n, 's>(
     51 + field: &'n normalized_ast::Field<'s, GDS>,
     52 + field_call: &'n normalized_ast::FieldCall<'s, GDS>,
     53 + typename_mappings: &'s HashMap<ast::TypeName, NodeFieldTypeNameMapping>,
     54 + role: &Role,
     55 + session_variables: &SessionVariables,
     56 +) -> Result<Option<NodeSelect<'n, 's>>, error::Error> {
     57 + let id_arg_value = field_call
     58 + .expected_argument(&lang_graphql::mk_name!("id"))?
     59 + .value
     60 + .as_id()?;
     61 + let decoded_id_value = general_purpose::STANDARD
     62 + .decode(id_arg_value.clone())
     63 + .map_err(|e| error::Error::ErrorInDecodingGlobalId {
     64 + encoded_value: id_arg_value.clone(),
     65 + decoding_error: e.to_string(),
     66 + })?;
     67 + let global_id: GlobalID = serde_json::from_slice(decoded_id_value.as_slice())?;
     68 + let typename_mapping = typename_mappings.get(&global_id.typename).ok_or(
     69 + error::InternalDeveloperError::GlobalIDTypenameMappingNotFound {
     70 + type_name: global_id.typename.clone(),
     71 + },
     72 + )?;
     73 + let role_model_select_permission = typename_mapping.model_select_permissions.get(role);
     74 + match role_model_select_permission {
     75 + // When a role doesn't have any model select permissions on the model
     76 + // that is the Global ID source for the object type, we just return `null`.
     77 + None => Ok(None),
     78 + Some(role_model_select_permission) => {
     79 + let model_source = typename_mapping.model_source.as_ref().ok_or(
     80 + error::InternalDeveloperError::NoSourceDataConnector {
     81 + type_name: global_id.typename.clone(),
     82 + field_name: lang_graphql::mk_name!("node"),
     83 + },
     84 + )?;
     85 + 
     86 + let field_mappings = model_source
     87 + .type_mappings
     88 + .get(&typename_mapping.type_name)
     89 + .map(|type_mapping| match type_mapping {
     90 + resolved::types::TypeMapping::Object { field_mappings } => field_mappings,
     91 + })
     92 + .ok_or_else(|| error::InternalEngineError::InternalGeneric {
     93 + description: format!(
     94 + "type '{}' not found in model source type_mappings",
     95 + typename_mapping.type_name
     96 + ),
     97 + })?;
     98 + let filter_clauses = global_id
     99 + .id
     100 + .iter()
     101 + .map(|(field_name, val)| {
     102 + let field_mapping = &field_mappings.get(field_name).ok_or_else(|| {
     103 + error::InternalEngineError::InternalGeneric {
     104 + description: format!("invalid field in annotation: {field_name:}"),
     105 + }
     106 + })?;
     107 + Ok(ndc::models::Expression::BinaryComparisonOperator {
     108 + column: ndc::models::ComparisonTarget::Column {
     109 + name: field_mapping.column.clone(),
     110 + path: vec![],
     111 + },
     112 + operator: ndc::models::BinaryComparisonOperator::Equal,
     113 + value: ndc::models::ComparisonValue::Scalar { value: val.clone() },
     114 + })
     115 + })
     116 + .collect::<Result<_, error::Error>>()?;
     117 + 
     118 + let new_selection_set = field
     119 + .selection_set
     120 + .filter_field_calls_by_typename(global_id.typename);
     121 + 
     122 + let mut usage_counts = UsagesCounts::new();
     123 + let model_selection = model_selection::model_selection_ir(
     124 + &new_selection_set,
     125 + &typename_mapping.type_name,
     126 + model_source,
     127 + BTreeMap::new(),
     128 + filter_clauses,
     129 + &role_model_select_permission.filter,
     130 + None, // limit
     131 + None, // offset
     132 + None, // order_by
     133 + session_variables,
     134 + // Get all the models/commands that were used as relationships
     135 + &mut usage_counts,
     136 + )?;
     137 + Ok(Some(NodeSelect {
     138 + field_name: &field_call.name,
     139 + model_selection,
     140 + selection_set: new_selection_set,
     141 + usage_counts,
     142 + }))
     143 + }
     144 + }
     145 +}
     146 + 
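For a concrete feel of the ID handling documented above: the `id` argument is base64-encoded JSON pairing a typename with the global-ID field values. Here is a small sketch of that round trip, using the same `base64` engine API this file imports; the exact `GlobalID` layout is internal, so the `{"typename", "id"}` shape below is an assumption covering only the two fields this function reads:

```
use base64::{engine::general_purpose, Engine};
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical payload with the two fields relay_node_ir reads:
    // the GraphQL typename and a map of global-ID fields to values.
    let global_id = json!({ "typename": "Author", "id": { "author_id": 1 } });

    // A relay `node(id:)` argument is the base64 of this JSON document.
    let encoded = general_purpose::STANDARD.encode(serde_json::to_vec(&global_id)?);

    // relay_node_ir reverses this: base64-decode, deserialize, then
    // build one equality filter per entry in `id`.
    let decoded: serde_json::Value =
        serde_json::from_slice(&general_purpose::STANDARD.decode(&encoded)?)?;
    assert_eq!(decoded, global_id);
    Ok(())
}
```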
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/query_root/select_many.rs
      1 +//! model_source IR for 'select_many' operation
      2 +//!
      3 +//! A 'select_many' operation fetches zero or more rows from a model
      4 + 
     6 +use hasura_authn_core::SessionVariables;
     7 +use lang_graphql::ast::common as ast;
     8 +use lang_graphql::normalized_ast;
     9 +use open_dds;
     10 +use serde::Serialize;
     11 +use std::collections::BTreeMap;
     12 + 
     13 +use super::error;
     14 +use crate::execute::ir::arguments;
     15 +use crate::execute::ir::filter;
     16 +use crate::execute::ir::model_selection;
     17 +use crate::execute::ir::order_by::build_ndc_order_by;
     18 +use crate::execute::ir::permissions;
     19 +use crate::execute::model_tracking::{count_model, UsagesCounts};
     20 +use crate::metadata::resolved;
     21 +use crate::metadata::resolved::subgraph::Qualified;
     22 +use crate::schema::types::{self, Annotation, ModelInputAnnotation};
     23 +use crate::schema::GDS;
     24 + 
     25 +/// IR for the 'select_many' operation on a model
     26 +#[derive(Debug, Serialize)]
     27 +pub struct ModelSelectMany<'n, 's> {
     28 + // The name of the field as published in the schema
     29 + pub field_name: ast::Name,
     30 + 
     31 + pub model_selection: model_selection::ModelSelection<'s>,
     32 + 
      33 + // The GraphQL output type of the operation
     34 + pub(crate) type_container: &'n ast::TypeContainer<ast::TypeName>,
     35 + 
      36 + // All the models/commands used in this operation. This includes the models/commands
      37 + // used via relationships. In the future, it will also include the models/commands used in the filter clause.
     38 + pub(crate) usage_counts: UsagesCounts,
     39 +}
     40 +/// Generates the IR for a 'select_many' operation
     41 +#[allow(irrefutable_let_patterns)]
     42 +pub(crate) fn select_many_generate_ir<'n, 's>(
     43 + field: &'n normalized_ast::Field<'s, GDS>,
     44 + field_call: &'n normalized_ast::FieldCall<'s, GDS>,
     45 + data_type: &Qualified<open_dds::types::CustomTypeName>,
     46 + model_source: &'s resolved::model::ModelSource,
     47 + session_variables: &SessionVariables,
     48 + model_name: &'s Qualified<open_dds::models::ModelName>,
     49 +) -> Result<ModelSelectMany<'n, 's>, error::Error> {
     50 + let mut limit = None;
     51 + let mut offset = None;
     52 + let mut filter_clause = Vec::new();
     53 + let mut order_by = None;
     54 + let mut model_arguments = BTreeMap::new();
     55 + 
     56 + for argument in field_call.arguments.values() {
     57 + match argument.info.generic {
     58 + annotation @ Annotation::Input(types::InputAnnotation::Model(
     59 + model_argument_annotation,
     60 + )) => match model_argument_annotation {
     61 + ModelInputAnnotation::ModelLimitArgument => {
     62 + limit = Some(argument.value.as_int_u32()?)
     63 + }
     64 + ModelInputAnnotation::ModelOffsetArgument => {
     65 + offset = Some(argument.value.as_int_u32()?)
     66 + }
     67 + ModelInputAnnotation::ModelFilterExpression => {
     68 + filter_clause = filter::resolve_filter_expression(argument.value.as_object()?)?
     69 + }
     70 + ModelInputAnnotation::ModelArgumentsExpression => match &argument.value {
     71 + normalized_ast::Value::Object(arguments) => {
     72 + model_arguments.extend(
     73 + arguments::build_ndc_model_arguments(
     74 + &field_call.name,
     75 + arguments.values(),
     76 + &model_source.type_mappings,
     77 + )?
     78 + .into_iter(),
     79 + );
     80 + }
     81 + _ => Err(error::InternalEngineError::InternalGeneric {
     82 + description: "Expected object value for model arguments".into(),
     83 + })?,
     84 + },
     85 + ModelInputAnnotation::ModelOrderByExpression => {
     86 + order_by = Some(build_ndc_order_by(argument)?)
     87 + }
     88 + _ => {
     89 + return Err(error::InternalEngineError::UnexpectedAnnotation {
     90 + annotation: annotation.clone(),
     91 + })?
     92 + }
     93 + },
     94 + 
     95 + annotation => {
     96 + return Err(error::InternalEngineError::UnexpectedAnnotation {
     97 + annotation: annotation.clone(),
     98 + })?
     99 + }
     100 + }
     101 + }
     102 + 
     103 + // Add the name of the root model
     104 + let mut usage_counts = UsagesCounts::new();
     105 + count_model(model_name.clone(), &mut usage_counts);
     106 + 
     107 + let model_selection = model_selection::model_selection_ir(
     108 + &field.selection_set,
     109 + data_type,
     110 + model_source,
     111 + model_arguments,
     112 + filter_clause,
     113 + permissions::get_select_filter_predicate(field_call)?,
     114 + limit,
     115 + offset,
     116 + order_by,
     117 + session_variables,
     118 + // Get all the models/commands that were used as relationships
     119 + &mut usage_counts,
     120 + )?;
     121 + 
     122 + Ok(ModelSelectMany {
     123 + field_name: field_call.name.clone(),
     124 + model_selection,
     125 + type_container: &field.type_container,
     126 + usage_counts,
     127 + })
     128 +}
     129 + 
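As a rough illustration of what the arguments collected above lower to: assuming the `ndc_client` models serialize to snake_case, tagged JSON (an assumption for illustration, not a normative rendering), a `select_many` call carrying `limit`, `offset` and a `where` filter produces an NDC query fragment along these lines:

```
use serde_json::json;

fn main() {
    // Sketch of the NDC query fragment built from the limit/offset/where
    // arguments; names and tags here are illustrative, not a normative
    // rendering of ndc_client::models::Query.
    let query = json!({
        "limit": 10,
        "offset": 20,
        "where": {
            "type": "binary_comparison_operator",
            "column": { "type": "column", "name": "title", "path": [] },
            "operator": { "type": "equal" },
            "value": { "type": "scalar", "value": "Hello" }
        }
    });
    println!("{}", serde_json::to_string_pretty(&query).unwrap());
}
```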
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/query_root/select_one.rs
     1 +//! model_source IR for 'select_one' operation
     2 +//!
     3 +//! A 'select_one' operation fetches zero or one row from a model
     4 + 
      5 +// Generates the IR for a 'select_one' operation
     6 +// TODO: Remove once TypeMapping has more than one variant
     7 +use hasura_authn_core::SessionVariables;
     8 +use lang_graphql::{ast::common as ast, normalized_ast};
     9 +use ndc_client as ndc;
     10 +use open_dds;
     11 +use serde::Serialize;
     12 + 
     13 +use super::error;
     14 +use crate::execute::ir::arguments;
     15 +use crate::execute::ir::model_selection;
     16 +use crate::execute::ir::permissions;
     17 +use crate::execute::model_tracking::{count_model, UsagesCounts};
     18 +use crate::metadata::resolved;
     19 +use crate::metadata::resolved::subgraph::Qualified;
     20 +use crate::schema::types::{self, Annotation, ModelInputAnnotation};
     21 +use crate::schema::GDS;
     22 + 
     23 +/// IR for the 'select_one' operation on a model
     24 +#[derive(Serialize, Debug)]
     25 +pub struct ModelSelectOne<'n, 's> {
     26 + // The name of the field as published in the schema
     27 + pub field_name: ast::Name,
     28 + 
     29 + pub model_selection: model_selection::ModelSelection<'s>,
     30 + 
      31 + // The GraphQL output type of the operation
     32 + pub(crate) type_container: &'n ast::TypeContainer<ast::TypeName>,
     33 + 
      34 + // All the models/commands used in this operation. This includes the models/commands
      35 + // used via relationships. In the future, it will also include the models/commands used in the filter clause.
     36 + pub(crate) usage_counts: UsagesCounts,
     37 +}
     38 + 
     39 +#[allow(irrefutable_let_patterns)]
     40 +pub(crate) fn select_one_generate_ir<'n, 's>(
     41 + field: &'n normalized_ast::Field<'s, GDS>,
     42 + field_call: &'s normalized_ast::FieldCall<'s, GDS>,
     43 + data_type: &Qualified<open_dds::types::CustomTypeName>,
     44 + model_source: &'s resolved::model::ModelSource,
     45 + session_variables: &SessionVariables,
     46 + model_name: &'s Qualified<open_dds::models::ModelName>,
     47 +) -> Result<ModelSelectOne<'n, 's>, error::Error> {
     48 + let field_mappings = model_source
     49 + .type_mappings
     50 + .get(data_type)
     51 + .and_then(|type_mapping| {
     52 + if let resolved::types::TypeMapping::Object { field_mappings } = type_mapping {
     53 + Some(field_mappings)
     54 + } else {
     55 + None
     56 + }
     57 + })
     58 + .ok_or_else(|| error::InternalEngineError::InternalGeneric {
     59 + description: format!("type '{:}' not found in source type_mappings", data_type),
     60 + })?;
     61 + 
     62 + let mut filter_clause = vec![];
     63 + let mut model_argument_fields = Vec::new();
     64 + for argument in field_call.arguments.values() {
     65 + match argument.info.generic {
     66 + annotation @ Annotation::Input(types::InputAnnotation::Model(
     67 + model_input_argument_annotation,
     68 + )) => match model_input_argument_annotation {
     69 + ModelInputAnnotation::ModelArgument { .. } => {
     70 + model_argument_fields.push(argument);
     71 + }
     72 + ModelInputAnnotation::ModelUniqueIdentifierArgument { field_name } => {
     73 + let field_mapping = &field_mappings.get(field_name).ok_or_else(|| {
     74 + error::InternalEngineError::InternalGeneric {
     75 + description: format!(
     76 + "invalid unique identifier field in annotation: {field_name:}"
     77 + ),
     78 + }
     79 + })?;
     80 + let ndc_expression = ndc::models::Expression::BinaryComparisonOperator {
     81 + column: ndc::models::ComparisonTarget::Column {
     82 + name: field_mapping.column.clone(),
     83 + path: vec![],
     84 + },
     85 + operator: ndc::models::BinaryComparisonOperator::Equal,
     86 + value: ndc::models::ComparisonValue::Scalar {
     87 + value: argument.value.as_json(),
     88 + },
     89 + };
     90 + filter_clause.push(ndc_expression);
     91 + }
     92 + _ => Err(error::InternalEngineError::UnexpectedAnnotation {
     93 + annotation: annotation.clone(),
     94 + })?,
     95 + },
     96 + annotation => Err(error::InternalEngineError::UnexpectedAnnotation {
     97 + annotation: annotation.clone(),
     98 + })?,
     99 + }
     100 + }
     101 + let model_arguments = arguments::build_ndc_model_arguments(
     102 + &field_call.name,
     103 + model_argument_fields.into_iter(),
     104 + &model_source.type_mappings,
     105 + )?;
     106 + 
     107 + // Add the name of the root model
     108 + let mut usage_counts = UsagesCounts::new();
     109 + count_model(model_name.clone(), &mut usage_counts);
     110 + 
     111 + let model_selection = model_selection::model_selection_ir(
     112 + &field.selection_set,
     113 + data_type,
     114 + model_source,
     115 + model_arguments,
     116 + filter_clause,
     117 + permissions::get_select_filter_predicate(field_call)?,
     118 + None, // limit
     119 + None, // offset
     120 + None, // order_by
     121 + session_variables,
     122 + // Get all the models/commands that were used as relationships
     123 + &mut usage_counts,
     124 + )?;
     125 + 
     126 + Ok(ModelSelectOne {
     127 + field_name: field_call.name.clone(),
     128 + model_selection,
     129 + type_container: &field.type_container,
     130 + usage_counts,
     131 + })
     132 +}
     133 + 
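To make the `ModelUniqueIdentifierArgument` arm concrete: for a hypothetical `article_by_id(article_id: 5)` field whose `article_id` key maps to the column `id`, the code above emits a single equality expression per key field. A sketch with minimal mirror types (the real ones live in `ndc_client::models`):

```
// Minimal mirror of the NDC expression type used above; the real
// definition lives in ndc_client::models.
#[derive(Debug)]
enum Expression {
    BinaryComparison { column: String, value: serde_json::Value },
}

// Build the equality filter select_one derives from a unique
// identifier argument: one comparison per key field.
fn pk_filter(column: &str, value: serde_json::Value) -> Expression {
    Expression::BinaryComparison {
        column: column.to_string(),
        value,
    }
}

fn main() {
    // `article_by_id(article_id: 5)`, with the field mapping
    // article_id -> "id", yields the filter: id = 5
    let filter = pk_filter("id", serde_json::json!(5));
    println!("{filter:?}");
}
```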
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/query_root.rs
     1 +//! IR of the query root type
     2 + 
     3 +use hasura_authn_core::{Session, SessionVariables};
     4 +use indexmap::IndexMap;
     5 +use lang_graphql as gql;
     6 +use lang_graphql::ast::common as ast;
     7 +use open_dds::{commands::CommandName, models, types::CustomTypeName};
     8 + 
     9 +use std::collections::HashMap;
     10 + 
     11 +use super::commands;
     12 +use super::root_field;
     13 +use crate::execute::error;
     14 +use crate::metadata::resolved::{self, subgraph};
     15 +use crate::schema::types::RootFieldKind;
     16 +use crate::schema::types::{
     17 + Annotation, NodeFieldTypeNameMapping, OutputAnnotation, RootFieldAnnotation,
     18 +};
     19 +use crate::schema::{mk_typename, GDS};
     20 + 
     21 +pub mod node_field;
     22 +pub mod select_many;
     23 +pub mod select_one;
     24 + 
      25 +/// Generates IR for the selection set of the 'query root' type
     26 +pub fn generate_ir<'n, 's>(
     27 + schema: &'s gql::schema::Schema<GDS>,
     28 + session: &Session,
     29 + selection_set: &'s gql::normalized_ast::SelectionSet<'s, GDS>,
     30 +) -> Result<IndexMap<ast::Alias, root_field::RootField<'n, 's>>, error::Error> {
     31 + let type_name = selection_set
     32 + .type_name
     33 + .clone()
     34 + .ok_or_else(|| gql::normalized_ast::Error::NoTypenameFound)?;
     35 + let mut ir = IndexMap::new();
     36 + for (alias, field) in selection_set.fields.iter() {
     37 + let field_call = field.field_call()?;
     38 + let field_ir = match field_call.name.as_str() {
     39 + "__typename" => Ok(root_field::QueryRootField::TypeName {
     40 + type_name: type_name.clone(),
     41 + }),
     42 + "__schema" => Ok(root_field::QueryRootField::SchemaField {
     43 + role: session.role.clone(),
     44 + selection_set: &field.selection_set,
     45 + schema,
     46 + }),
     47 + "__type" => {
     48 + let ir = generate_type_field_ir(schema, &field.selection_set, field_call, session)?;
     49 + Ok(ir)
     50 + }
     51 + _ => match field_call.info.generic {
     52 + annotation @ Annotation::Output(field_annotation) => match field_annotation {
     53 + OutputAnnotation::RootField(root_field) => match root_field {
     54 + RootFieldAnnotation::Model {
     55 + data_type,
     56 + source,
     57 + kind,
     58 + name: model_name,
     59 + } => {
     60 + let ir = generate_model_rootfield_ir(
     61 + &type_name, source, data_type, kind, field, field_call, session,
     62 + model_name,
     63 + )?;
     64 + Ok(ir)
     65 + }
     66 + RootFieldAnnotation::Command {
     67 + name,
     68 + underlying_object_typename,
     69 + source,
     70 + } => {
     71 + let ir = generate_command_rootfield_ir(
     72 + name,
     73 + &type_name,
     74 + source,
     75 + underlying_object_typename,
     76 + field,
     77 + field_call,
     78 + &session.variables,
     79 + )?;
     80 + Ok(ir)
     81 + }
     82 + RootFieldAnnotation::RelayNode { typename_mappings } => {
     83 + let ir = generate_nodefield_ir(
     84 + field,
     85 + field_call,
     86 + typename_mappings,
     87 + session,
     88 + )?;
     89 + Ok(ir)
     90 + }
     91 + _ => Err(error::Error::from(
     92 + error::InternalEngineError::UnexpectedAnnotation {
     93 + annotation: annotation.clone(),
     94 + },
     95 + )),
     96 + },
     97 + _ => Err(error::Error::from(
     98 + error::InternalEngineError::UnexpectedAnnotation {
     99 + annotation: annotation.clone(),
     100 + },
     101 + )),
     102 + },
     103 + annotation => Err(error::Error::from(
     104 + error::InternalEngineError::UnexpectedAnnotation {
     105 + annotation: annotation.clone(),
     106 + },
     107 + )),
     108 + },
     109 + }?;
     110 + ir.insert(
     111 + alias.clone(),
     112 + root_field::RootField::QueryRootField(field_ir),
     113 + );
     114 + }
     115 + Ok(ir)
     116 +}
     117 + 
     118 +fn generate_type_field_ir<'n, 's>(
     119 + schema: &'s gql::schema::Schema<GDS>,
     120 + selection_set: &'s gql::normalized_ast::SelectionSet<GDS>,
     121 + field_call: &gql::normalized_ast::FieldCall<GDS>,
     122 + session: &Session,
     123 +) -> Result<root_field::QueryRootField<'n, 's>, error::Error> {
     124 + let name = field_call
     125 + .expected_argument(&lang_graphql::mk_name!("name"))?
     126 + .value
     127 + .as_string()?;
     128 + let type_name = mk_typename(name).map_err(|_e| error::Error::TypeFieldInvalidGraphQlName {
     129 + name: name.to_string(),
     130 + })?;
     131 + Ok(root_field::QueryRootField::TypeField {
     132 + role: session.role.clone(),
     133 + selection_set,
     134 + schema,
     135 + type_name,
     136 + })
     137 +}
     138 + 
     139 +#[allow(clippy::too_many_arguments)]
     140 +fn generate_model_rootfield_ir<'n, 's>(
     141 + type_name: &ast::TypeName,
     142 + source: &'s Option<resolved::model::ModelSource>,
     143 + data_type: &subgraph::Qualified<CustomTypeName>,
     144 + kind: &RootFieldKind,
     145 + field: &'n gql::normalized_ast::Field<'s, GDS>,
     146 + field_call: &'s gql::normalized_ast::FieldCall<'s, GDS>,
     147 + session: &Session,
     148 + model_name: &'s subgraph::Qualified<models::ModelName>,
     149 +) -> Result<root_field::QueryRootField<'n, 's>, error::Error> {
     150 + let source =
     151 + source
     152 + .as_ref()
     153 + .ok_or_else(|| error::InternalDeveloperError::NoSourceDataConnector {
     154 + type_name: type_name.clone(),
     155 + field_name: field_call.name.clone(),
     156 + })?;
     157 + let ir = match kind {
     158 + RootFieldKind::SelectOne => root_field::QueryRootField::ModelSelectOne {
     159 + selection_set: &field.selection_set,
     160 + ir: select_one::select_one_generate_ir(
     161 + field,
     162 + field_call,
     163 + data_type,
     164 + source,
     165 + &session.variables,
     166 + model_name,
     167 + )?,
     168 + },
     169 + RootFieldKind::SelectMany => root_field::QueryRootField::ModelSelectMany {
     170 + selection_set: &field.selection_set,
     171 + ir: select_many::select_many_generate_ir(
     172 + field,
     173 + field_call,
     174 + data_type,
     175 + source,
     176 + &session.variables,
     177 + model_name,
     178 + )?,
     179 + },
     180 + };
     181 + Ok(ir)
     182 +}
     183 + 
     184 +fn generate_command_rootfield_ir<'n, 's>(
     185 + name: &'s subgraph::Qualified<CommandName>,
     186 + type_name: &ast::TypeName,
     187 + source: &'s Option<resolved::command::CommandSource>,
     188 + underlying_object_typename: &'s Option<subgraph::Qualified<CustomTypeName>>,
     189 + field: &'n gql::normalized_ast::Field<'s, GDS>,
     190 + field_call: &'s gql::normalized_ast::FieldCall<'s, GDS>,
     191 + session_variables: &SessionVariables,
     192 +) -> Result<root_field::QueryRootField<'n, 's>, error::Error> {
     193 + let source =
     194 + source
     195 + .as_ref()
     196 + .ok_or_else(|| error::InternalDeveloperError::NoSourceDataConnector {
     197 + type_name: type_name.clone(),
     198 + field_name: field_call.name.clone(),
     199 + })?;
     200 + let ir = root_field::QueryRootField::CommandRepresentation {
     201 + selection_set: &field.selection_set,
     202 + ir: commands::command_generate_ir(
     203 + name,
     204 + field,
     205 + field_call,
     206 + underlying_object_typename,
     207 + source,
     208 + session_variables,
     209 + )?,
     210 + };
     211 + Ok(ir)
     212 +}
     213 + 
     214 +fn generate_nodefield_ir<'n, 's>(
     215 + field: &'n gql::normalized_ast::Field<'s, GDS>,
     216 + field_call: &'n gql::normalized_ast::FieldCall<'s, GDS>,
     217 + typename_mappings: &'s HashMap<ast::TypeName, NodeFieldTypeNameMapping>,
     218 + session: &Session,
     219 +) -> Result<root_field::QueryRootField<'n, 's>, error::Error> {
     220 + let ir = root_field::QueryRootField::NodeSelect(node_field::relay_node_ir(
     221 + field,
     222 + field_call,
     223 + typename_mappings,
     224 + &session.role,
     225 + &session.variables,
     226 + )?);
     227 + Ok(ir)
     228 +}
     229 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/relationship.rs
     1 +use std::collections::BTreeMap;
     2 + 
     3 +use hasura_authn_core::SessionVariables;
     4 +use lang_graphql::normalized_ast::{self, Field};
     5 +use open_dds::{
     6 + relationships::{RelationshipName, RelationshipType},
     7 + types::{CustomTypeName, FieldName},
     8 +};
     9 + 
     10 +use ndc_client as ndc;
     11 +use serde::Serialize;
     12 + 
     13 +use super::filter::resolve_filter_expression;
     14 +use super::model_selection::model_selection_ir;
     15 +use super::order_by::build_ndc_order_by;
     16 +use super::permissions;
     17 +use super::selection_set::FieldSelection;
     18 +use crate::execute::error;
     19 +use crate::execute::model_tracking::{count_model, UsagesCounts};
     20 +use crate::metadata::resolved::subgraph::serialize_qualified_btreemap;
     21 +use crate::schema::types::output_type::relationship::{
     22 + ModelRelationshipAnnotation, ModelTargetSource,
     23 +};
     24 +use crate::{
     25 + metadata::resolved::{self, subgraph::Qualified},
     26 + schema::{
     27 + types::{Annotation, InputAnnotation, ModelInputAnnotation},
     28 + GDS,
     29 + },
     30 +};
     31 + 
     32 +#[derive(Debug, Serialize)]
     33 +pub(crate) struct RelationshipInfo<'s> {
     34 + pub annotation: &'s ModelRelationshipAnnotation,
     35 + pub source_data_connector: &'s resolved::data_connector::DataConnector,
     36 + #[serde(serialize_with = "serialize_qualified_btreemap")]
     37 + pub source_type_mappings: &'s BTreeMap<Qualified<CustomTypeName>, resolved::types::TypeMapping>,
     38 + pub target_source: &'s ModelTargetSource,
     39 +}
     40 + 
     41 +#[derive(Debug, Clone, Serialize)]
     42 +pub struct RemoteRelationshipInfo<'s> {
     43 + pub annotation: &'s ModelRelationshipAnnotation,
      44 + /// This contains processed information about the mappings.
      45 + /// `RelationshipMapping` only contains a mapping of field names; this
      46 + /// also carries the `resolved::types::FieldMapping` for each field.
      47 + /// Also see `build_remote_relationship`.
     48 + pub join_mapping: Vec<(SourceField, TargetField)>,
     49 +}
     50 + 
     51 +pub type SourceField = (FieldName, resolved::types::FieldMapping);
     52 +pub type TargetField = (FieldName, resolved::types::FieldMapping);
     53 + 
     54 +pub(crate) fn process_relationship_definition(
     55 + relationship_info: &RelationshipInfo,
     56 +) -> Result<ndc::models::Relationship, error::Error> {
     57 + let &RelationshipInfo {
     58 + annotation,
     59 + source_data_connector,
     60 + source_type_mappings,
     61 + target_source,
     62 + } = relationship_info;
     63 + 
     64 + let mut column_mapping = BTreeMap::new();
     65 + for resolved::relationship::RelationshipModelMapping {
     66 + source_field: source_field_path,
     67 + target_field: target_field_path,
     68 + } in annotation.mappings.iter()
     69 + {
     70 + if !matches!(
     71 + relationship_execution_category(target_source, source_data_connector),
     72 + RelationshipExecutionCategory::Local
     73 + ) {
     74 + Err(error::InternalEngineError::RemoteRelationshipsAreNotSupported)?
     75 + } else {
     76 + let source_column = get_field_mapping_of_field_name(
     77 + source_type_mappings,
     78 + &annotation.source_type,
     79 + &annotation.relationship_name,
     80 + &source_field_path.field_name,
     81 + )?;
     82 + let target_column = get_field_mapping_of_field_name(
     83 + &target_source.model.type_mappings,
     84 + &annotation.target_type,
     85 + &annotation.relationship_name,
     86 + &target_field_path.field_name,
     87 + )?;
     88 + 
     89 + if column_mapping
     90 + .insert(source_column.column, target_column.column)
     91 + .is_some()
     92 + {
     93 + Err(error::InternalEngineError::MappingExistsInRelationship {
     94 + source_column: source_field_path.field_name.clone(),
     95 + relationship_name: annotation.relationship_name.clone(),
     96 + })?
     97 + }
     98 + }
     99 + }
     100 + let ndc_relationship = ndc_client::models::Relationship {
     101 + column_mapping,
     102 + relationship_type: {
     103 + match annotation.relationship_type {
     104 + RelationshipType::Object => ndc_client::models::RelationshipType::Object,
     105 + RelationshipType::Array => ndc_client::models::RelationshipType::Array,
     106 + }
     107 + },
     108 + target_collection: target_source.model.collection.to_string(),
     109 + arguments: BTreeMap::new(),
     110 + };
     111 + Ok(ndc_relationship)
     112 +}
     113 + 
     114 +enum RelationshipExecutionCategory {
      115 + // Push the relationship definition down to the data connector
     116 + Local,
     117 + // Use foreach in the data connector to fetch related rows for multiple objects in a single request
     118 + RemoteForEach,
     119 +}
     120 + 
     121 +#[allow(clippy::match_single_binding)]
     122 +fn relationship_execution_category(
     123 + target_source: &ModelTargetSource,
     124 + source_connector: &resolved::data_connector::DataConnector,
     125 +) -> RelationshipExecutionCategory {
     126 + // It's a local relationship if the source and target connectors are the same and
     127 + // the connector supports relationships.
     128 + if target_source.model.data_connector.name == source_connector.name
     129 + && target_source.capabilities.relationships
     130 + {
     131 + RelationshipExecutionCategory::Local
     132 + } else {
     133 + match target_source.capabilities.foreach {
     134 + // TODO: When we support naive relationships for connectors not implementing foreach,
     135 + // add another match arm / return enum variant
     136 + () => RelationshipExecutionCategory::RemoteForEach,
     137 + }
     138 + }
     139 +}
     140 + 
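The categorization above boils down to one predicate. A standalone sketch of the same rule follows, with connector references replaced by plain strings and the capability by a boolean for illustration:

```
#[derive(Debug, PartialEq)]
enum ExecutionCategory {
    Local,
    RemoteForEach,
}

// Mirror of relationship_execution_category: a relationship is pushed
// down ("local") only when both halves live on the same connector and
// that connector advertises the relationships capability; otherwise we
// fall back to a foreach-based remote join.
fn categorize(
    source_connector: &str,
    target_connector: &str,
    target_supports_relationships: bool,
) -> ExecutionCategory {
    if source_connector == target_connector && target_supports_relationships {
        ExecutionCategory::Local
    } else {
        ExecutionCategory::RemoteForEach
    }
}

fn main() {
    assert_eq!(categorize("pg1", "pg1", true), ExecutionCategory::Local);
    assert_eq!(categorize("pg1", "mongo", true), ExecutionCategory::RemoteForEach);
    assert_eq!(categorize("pg1", "pg1", false), ExecutionCategory::RemoteForEach);
}
```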
     141 +pub(crate) fn generate_relationship_ir<'s>(
     142 + field: &Field<'s, GDS>,
     143 + annotation: &'s ModelRelationshipAnnotation,
     144 + data_connector: &'s resolved::data_connector::DataConnector,
     145 + type_mappings: &'s BTreeMap<Qualified<CustomTypeName>, resolved::types::TypeMapping>,
     146 + session_variables: &SessionVariables,
     147 + usage_counts: &mut UsagesCounts,
     148 +) -> Result<FieldSelection<'s>, error::Error> {
     149 + // Add the target model being used in the usage counts
     150 + count_model(annotation.model_name.clone(), usage_counts);
     151 + let field_call = field.field_call()?;
     152 + 
     153 + let mut limit = None;
     154 + let mut offset = None;
     155 + let mut filter_clause = Vec::new();
     156 + let mut order_by = None;
     157 + 
     158 + for argument in field_call.arguments.values() {
     159 + match argument.info.generic {
     160 + annotation @ Annotation::Input(argument_annotation) => match argument_annotation {
     161 + InputAnnotation::Model(model_argument_annotation) => {
     162 + match model_argument_annotation {
     163 + ModelInputAnnotation::ModelLimitArgument => {
     164 + limit = Some(argument.value.as_int_u32()?)
     165 + }
     166 + ModelInputAnnotation::ModelOffsetArgument => {
     167 + offset = Some(argument.value.as_int_u32()?)
     168 + }
     169 + ModelInputAnnotation::ModelFilterExpression => {
     170 + filter_clause = resolve_filter_expression(argument.value.as_object()?)?
     171 + }
     172 + ModelInputAnnotation::ModelOrderByExpression => {
     173 + order_by = Some(build_ndc_order_by(argument)?)
     174 + }
     175 + _ => {
     176 + return Err(error::InternalEngineError::UnexpectedAnnotation {
     177 + annotation: annotation.clone(),
     178 + })?
     179 + }
     180 + }
     181 + }
     182 + _ => {
     183 + return Err(error::InternalEngineError::UnexpectedAnnotation {
     184 + annotation: annotation.clone(),
     185 + })?
     186 + }
     187 + },
     188 + 
     189 + annotation => {
     190 + return Err(error::InternalEngineError::UnexpectedAnnotation {
     191 + annotation: annotation.clone(),
     192 + })?
     193 + }
     194 + }
     195 + }
     196 + 
     197 + let target_source =
     198 + annotation
     199 + .target_source
     200 + .as_ref()
     201 + .ok_or_else(|| match &field.selection_set.type_name {
     202 + Some(type_name) => {
     203 + error::Error::from(error::InternalDeveloperError::NoSourceDataConnector {
     204 + type_name: type_name.clone(),
     205 + field_name: field_call.name.clone(),
     206 + })
     207 + }
     208 + None => error::Error::from(normalized_ast::Error::NoTypenameFound),
     209 + })?;
     210 + match relationship_execution_category(target_source, data_connector) {
     211 + RelationshipExecutionCategory::Local => build_local_relationship(
     212 + field,
     213 + field_call,
     214 + annotation,
     215 + data_connector,
     216 + type_mappings,
     217 + target_source,
     218 + filter_clause,
     219 + limit,
     220 + offset,
     221 + order_by,
     222 + session_variables,
     223 + usage_counts,
     224 + ),
     225 + RelationshipExecutionCategory::RemoteForEach => build_remote_relationship(
     226 + field,
     227 + field_call,
     228 + annotation,
     229 + type_mappings,
     230 + target_source,
     231 + filter_clause,
     232 + limit,
     233 + offset,
     234 + order_by,
     235 + session_variables,
     236 + usage_counts,
     237 + ),
     238 + }
     239 +}
     240 + 
     241 +#[allow(clippy::too_many_arguments)]
     242 +pub(crate) fn build_local_relationship<'s>(
     243 + field: &normalized_ast::Field<'s, GDS>,
     244 + field_call: &normalized_ast::FieldCall<'s, GDS>,
     245 + annotation: &'s ModelRelationshipAnnotation,
     246 + data_connector: &'s resolved::data_connector::DataConnector,
     247 + type_mappings: &'s BTreeMap<Qualified<CustomTypeName>, resolved::types::TypeMapping>,
     248 + target_source: &'s ModelTargetSource,
     249 + filter_clause: Vec<ndc::models::Expression>,
     250 + limit: Option<u32>,
     251 + offset: Option<u32>,
     252 + order_by: Option<ndc::models::OrderBy>,
     253 + session_variables: &SessionVariables,
     254 + usage_counts: &mut UsagesCounts,
     255 +) -> Result<FieldSelection<'s>, error::Error> {
     256 + let relationships_ir = model_selection_ir(
     257 + &field.selection_set,
     258 + &annotation.target_type,
     259 + &target_source.model,
     260 + BTreeMap::new(),
     261 + filter_clause,
     262 + permissions::get_select_filter_predicate(field_call)?,
     263 + limit,
     264 + offset,
     265 + order_by,
     266 + session_variables,
     267 + usage_counts,
     268 + )?;
     269 + let rel_info = RelationshipInfo {
     270 + annotation,
     271 + source_data_connector: data_connector,
     272 + source_type_mappings: type_mappings,
     273 + target_source,
     274 + };
     275 + 
      276 + // Relationship names need to be unique across the IR. This is so that the
      277 + // NDC can use these names to figure out which joins to use.
      278 + // A single "source type" can have only one relationship with a given name,
      279 + // hence the relationship name in the IR is a tuple of the source type
      280 + // and the relationship name.
      281 + // Relationship name = (source_type, relationship_name)
     282 + let relationship_name =
     283 + serde_json::to_string(&(&annotation.source_type, &annotation.relationship_name))?;
     284 + 
     285 + Ok(FieldSelection::LocalRelationship {
     286 + query: relationships_ir,
     287 + name: relationship_name,
     288 + relationship_info: rel_info,
     289 + })
     290 +}
     291 + 
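Concretely, serializing the `(source_type, relationship_name)` tuple with `serde_json` yields a JSON-array string, so relationships that share a metadata name but hang off different source types still get distinct IR keys. A quick sketch (plain strings stand in for the qualified engine types):

```
fn main() -> Result<(), serde_json::Error> {
    // serde_json serializes a Rust tuple as a JSON array, so the
    // generated relationship name is e.g. ["Author","articles"].
    let name_a = serde_json::to_string(&("Author", "articles"))?;
    let name_b = serde_json::to_string(&("Publisher", "articles"))?;
    assert_eq!(name_a, r#"["Author","articles"]"#);
    // Same relationship name, different source type => different key.
    assert_ne!(name_a, name_b);
    Ok(())
}
```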
     292 +#[allow(clippy::too_many_arguments)]
     293 +pub(crate) fn build_remote_relationship<'n, 's>(
     294 + field: &'n normalized_ast::Field<'s, GDS>,
     295 + field_call: &'n normalized_ast::FieldCall<'s, GDS>,
     296 + annotation: &'s ModelRelationshipAnnotation,
     297 + type_mappings: &'s BTreeMap<Qualified<CustomTypeName>, resolved::types::TypeMapping>,
     298 + target_source: &'s ModelTargetSource,
     299 + filter_clause: Vec<ndc::models::Expression>,
     300 + limit: Option<u32>,
     301 + offset: Option<u32>,
     302 + order_by: Option<ndc::models::OrderBy>,
     303 + session_variables: &SessionVariables,
     304 + usage_counts: &mut UsagesCounts,
     305 +) -> Result<FieldSelection<'s>, error::Error> {
     306 + let mut join_mapping: Vec<(SourceField, TargetField)> = vec![];
     307 + for resolved::relationship::RelationshipModelMapping {
     308 + source_field: source_field_path,
     309 + target_field: target_field_path,
     310 + } in annotation.mappings.iter()
     311 + {
     312 + let source_column = get_field_mapping_of_field_name(
     313 + type_mappings,
     314 + &annotation.source_type,
     315 + &annotation.relationship_name,
     316 + &source_field_path.field_name,
     317 + )?;
     318 + let target_column = get_field_mapping_of_field_name(
     319 + &target_source.model.type_mappings,
     320 + &annotation.target_type,
     321 + &annotation.relationship_name,
     322 + &target_field_path.field_name,
     323 + )?;
     324 + 
     325 + let source_field = (source_field_path.field_name.clone(), source_column);
     326 + let target_field = (target_field_path.field_name.clone(), target_column);
     327 + join_mapping.push((source_field, target_field));
     328 + }
     329 + let mut remote_relationships_ir = model_selection_ir(
     330 + &field.selection_set,
     331 + &annotation.target_type,
     332 + &target_source.model,
     333 + BTreeMap::new(),
     334 + filter_clause,
     335 + permissions::get_select_filter_predicate(field_call)?,
     336 + limit,
     337 + offset,
     338 + order_by,
     339 + session_variables,
     340 + usage_counts,
     341 + )?;
     342 + 
     343 + // modify `ModelSelection` to include the join condition in `where` with a variable
     344 + for (_source, (_field_name, field)) in &join_mapping {
     345 + let target_value_variable = format!("${}", &field.column);
     346 + let comparison_exp = ndc::models::Expression::BinaryComparisonOperator {
     347 + column: ndc::models::ComparisonTarget::Column {
     348 + name: field.column.clone(),
     349 + path: vec![],
     350 + },
     351 + operator: ndc::models::BinaryComparisonOperator::Equal,
     352 + value: ndc::models::ComparisonValue::Variable {
     353 + name: target_value_variable,
     354 + },
     355 + };
     356 + remote_relationships_ir.filter_clause.push(comparison_exp);
     357 + }
     358 + let rel_info = RemoteRelationshipInfo {
     359 + annotation,
     360 + join_mapping,
     361 + };
     362 + Ok(FieldSelection::RemoteRelationship {
     363 + ir: remote_relationships_ir,
     364 + relationship_info: rel_info,
     365 + })
     366 +}
     367 + 
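Tying the remote-join wiring together: for each join mapping, the loop above compares the target column against an NDC variable named `$<column>`, which the later foreach request binds row by row from the source response. A hedged sketch of the appended filter, rendered as illustrative JSON rather than the real `ndc_client::models` types:

```
use serde_json::json;

fn main() {
    // For a join mapping author.id -> article.author_id, the loop above
    // appends roughly this expression to the target model's filter:
    // compare the target column against the variable "$author_id",
    // which the foreach request later binds row-by-row.
    let join_condition = json!({
        "type": "binary_comparison_operator",
        "column": { "type": "column", "name": "author_id", "path": [] },
        "operator": { "type": "equal" },
        "value": { "type": "variable", "name": "$author_id" }
    });
    println!("{join_condition}");
}
```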
     368 +fn get_field_mapping_of_field_name(
     369 + type_mappings: &BTreeMap<Qualified<CustomTypeName>, resolved::types::TypeMapping>,
     370 + type_name: &Qualified<CustomTypeName>,
     371 + relationship_name: &RelationshipName,
     372 + field_name: &FieldName,
     373 +) -> Result<resolved::types::FieldMapping, error::Error> {
     374 + let type_mapping = type_mappings.get(type_name).ok_or_else(|| {
     375 + error::InternalDeveloperError::TypeMappingNotFoundForRelationship {
     376 + type_name: type_name.clone(),
     377 + relationship_name: relationship_name.clone(),
     378 + }
     379 + })?;
     380 + match type_mapping {
     381 + resolved::types::TypeMapping::Object { field_mappings } => Ok(field_mappings
     382 + .get(field_name)
     383 + .ok_or_else(
     384 + || error::InternalDeveloperError::FieldMappingNotFoundForRelationship {
     385 + type_name: type_name.clone(),
     386 + relationship_name: relationship_name.clone(),
     387 + field_name: field_name.clone(),
     388 + },
     389 + )?
     390 + .clone()),
     391 + }
     392 +}
     393 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/root_field.rs
      1 +//! IR of a root field
     2 +use lang_graphql as gql;
     3 +use lang_graphql::ast::common as ast;
     4 +use open_dds::permissions::Role;
     5 +use serde::Serialize;
     6 + 
     7 +use super::{
     8 + commands,
     9 + query_root::{node_field, select_many, select_one},
     10 +};
     11 +use crate::schema::GDS;
     12 + 
     13 +/// IR of a root field
     14 +#[derive(Serialize, Debug)]
     15 +pub enum RootField<'n, 's> {
     16 + QueryRootField(QueryRootField<'n, 's>),
     17 + MutationRootField(MutationRootField<'n, 's>),
     18 +}
     19 + 
     20 +/// IR of a query root field
     21 +#[derive(Serialize, Debug)]
     22 +pub enum QueryRootField<'n, 's> {
     23 + // __typename field on query root
     24 + TypeName {
     25 + type_name: ast::TypeName,
     26 + },
     27 + // __schema field
     28 + SchemaField {
     29 + role: Role,
     30 + selection_set: &'n gql::normalized_ast::SelectionSet<'s, GDS>,
     31 + schema: &'s gql::schema::Schema<GDS>,
     32 + },
     33 + // __type field
     34 + TypeField {
     35 + selection_set: &'n gql::normalized_ast::SelectionSet<'s, GDS>,
     36 + schema: &'s gql::schema::Schema<GDS>,
     37 + type_name: ast::TypeName,
     38 + role: Role,
     39 + },
     40 + // Operation that selects a single row from a model
     41 + ModelSelectOne {
     42 + selection_set: &'n gql::normalized_ast::SelectionSet<'s, GDS>,
     43 + ir: select_one::ModelSelectOne<'n, 's>,
     44 + },
     45 + // Operation that selects many rows from a model
     46 + ModelSelectMany {
     47 + selection_set: &'n gql::normalized_ast::SelectionSet<'s, GDS>,
     48 + ir: select_many::ModelSelectMany<'n, 's>,
     49 + },
     50 + // Operation that selects a single row from the model corresponding
     51 + // to the Global Id input.
     52 + NodeSelect(Option<node_field::NodeSelect<'n, 's>>),
     53 + CommandRepresentation {
     54 + selection_set: &'n gql::normalized_ast::SelectionSet<'s, GDS>,
     55 + ir: commands::CommandRepresentation<'n, 's>,
     56 + },
     57 +}
     58 + 
     59 +/// IR of a mutation root field
     60 +#[derive(Serialize, Debug)]
     61 +pub enum MutationRootField<'n, 's> {
     62 + // __typename field on mutation root
     63 + TypeName {
     64 + type_name: ast::TypeName,
     65 + },
     66 + CommandRepresentation {
     67 + selection_set: &'n gql::normalized_ast::SelectionSet<'s, GDS>,
     68 + ir: commands::CommandRepresentation<'n, 's>,
     69 + },
     70 +}
     71 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir/selection_set.rs
     1 +use hasura_authn_core::SessionVariables;
     2 +use indexmap::IndexMap;
     3 +use lang_graphql::ast::common::Alias;
     4 +use lang_graphql::normalized_ast;
     5 +use ndc_client as ndc;
     6 +use open_dds::types::{CustomTypeName, FieldName};
     7 +use serde::Serialize;
     8 +use std::collections::{BTreeMap, HashMap};
     9 + 
     10 +use super::model_selection::{self, ModelSelection};
     11 +use super::relationship::{self, RelationshipInfo, RemoteRelationshipInfo};
     12 +use crate::execute::error;
     13 +use crate::execute::global_id;
     14 +use crate::execute::model_tracking::UsagesCounts;
     15 +use crate::execute::remote_joins::types::{JoinLocations, MonotonicCounter};
     16 +use crate::execute::remote_joins::types::{Location, RemoteJoin};
     17 +use crate::metadata::resolved;
     18 +use crate::metadata::resolved::subgraph::Qualified;
     19 +use crate::schema::{
     20 + types::{Annotation, OutputAnnotation, RootFieldAnnotation},
     21 + GDS,
     22 +};
     23 + 
     24 +#[derive(Debug, Serialize)]
     25 +pub(crate) enum FieldSelection<'s> {
     26 + Column {
     27 + column: String,
     28 + },
     29 + LocalRelationship {
     30 + query: ModelSelection<'s>,
      31 + /// Relationship names need to be unique across the IR. This field contains
      32 + /// the uniquely generated relationship name. `ModelRelationshipAnnotation`
      33 + /// contains a relationship name, but that is the name from the metadata.
     34 + name: String,
     35 + relationship_info: RelationshipInfo<'s>,
     36 + },
     37 + RemoteRelationship {
     38 + ir: ModelSelection<'s>,
     39 + relationship_info: RemoteRelationshipInfo<'s>,
     40 + },
     41 +}
     42 + 
     43 +/// IR that represents the selected fields of an output type.
     44 +#[derive(Debug, Serialize)]
     45 +pub(crate) struct ResultSelectionSet<'s> {
      46 + // The fields in the selection set. They are stored in the form that would
      47 + // be converted and sent over the wire. The map is serialized in insertion
      48 + // order to produce deterministic golden files.
     49 + pub(crate) fields: IndexMap<String, FieldSelection<'s>>,
     50 +}
     51 + 
     52 +fn build_global_id_fields(
     53 + global_id_fields: &Vec<FieldName>,
     54 + field_mappings: &BTreeMap<FieldName, resolved::types::FieldMapping>,
     55 + field_alias: &Alias,
     56 + fields: &mut IndexMap<String, FieldSelection>,
     57 +) -> Result<(), error::Error> {
     58 + for field_name in global_id_fields {
     59 + let field_mapping = field_mappings.get(field_name).ok_or_else(|| {
     60 + error::InternalEngineError::InternalGeneric {
     61 + description: format!("invalid global id field in annotation: {field_name:}"),
     62 + }
     63 + })?;
      64 + // Prefix the global column id with something that the user is unlikely
      65 + // to choose, so that it doesn't conflict with any of the fields in the
      66 + // selection set.
     68 + let global_col_id_alias = global_id::global_id_col_format(field_alias, field_name);
     69 + 
     70 + fields.insert(
     71 + global_col_id_alias,
     72 + FieldSelection::Column {
     73 + column: field_mapping.column.clone(),
     74 + },
     75 + );
     76 + }
     77 + Ok(())
     78 +}
     79 + 
     80 +/// Builds the IR from a normalized selection set
     81 +/// `field_mappings` is needed separately during IR generation and cannot be embedded
     82 +/// into the annotation itself because the same GraphQL type may have different field
     83 +/// sources depending on the model being queried.
     84 +pub(crate) fn generate_selection_set_ir<'s>(
     85 + selection_set: &normalized_ast::SelectionSet<'s, GDS>,
     86 + data_connector: &'s resolved::data_connector::DataConnector,
     87 + type_mappings: &'s BTreeMap<Qualified<CustomTypeName>, resolved::types::TypeMapping>,
     88 + field_mappings: &BTreeMap<FieldName, resolved::types::FieldMapping>,
     89 + session_variables: &SessionVariables,
     90 + usage_counts: &mut UsagesCounts,
     91 +) -> Result<ResultSelectionSet<'s>, error::Error> {
     92 + let mut fields = IndexMap::new();
     93 + for field in selection_set.fields.values() {
     94 + let field_call = field.field_call()?;
     95 + match field_call.info.generic {
     96 + annotation @ Annotation::Output(annotated_field) => match annotated_field {
     97 + OutputAnnotation::Field { name, .. } => {
     98 + let field_mapping = &field_mappings.get(name).ok_or_else(|| {
     99 + error::InternalEngineError::InternalGeneric {
     100 + description: format!("invalid field in annotation: {name:}"),
     101 + }
     102 + })?;
     103 + fields.insert(
     104 + field.alias.to_string(),
     105 + FieldSelection::Column {
     106 + column: field_mapping.column.clone(),
     107 + },
     108 + );
     109 + }
     110 + OutputAnnotation::RootField(RootFieldAnnotation::Introspection) => {}
     111 + OutputAnnotation::GlobalIDField { global_id_fields } => {
     112 + build_global_id_fields(
     113 + global_id_fields,
     114 + field_mappings,
     115 + &field.alias,
     116 + &mut fields,
     117 + )?;
     118 + }
     119 + OutputAnnotation::RelayNodeInterfaceID { typename_mappings } => {
      120 + // Even though we already have the value of the global ID field
      121 + // here, we re-compute the same value by decoding the ID.
      122 + // We do this because it simplifies the code structure.
      123 + // If the NDC were to accept key-value pairs from the v3-engine and
      124 + // output them as-is, we could avoid this computation.
     125 + let type_name = field.selection_set.type_name.clone().ok_or(
     126 + error::InternalEngineError::InternalGeneric {
     127 + description: "typename not found while resolving NodeInterfaceId"
     128 + .to_string(),
     129 + },
     130 + )?;
     131 + let global_id_fields = typename_mappings.get(&type_name).ok_or(
     132 + error::InternalEngineError::InternalGeneric {
     133 + description: format!(
     134 + "Global ID fields not found of the type {}",
     135 + type_name
     136 + ),
     137 + },
     138 + )?;
     139 + 
     140 + build_global_id_fields(
     141 + global_id_fields,
     142 + field_mappings,
     143 + &field.alias,
     144 + &mut fields,
     145 + )?;
     146 + }
     147 + OutputAnnotation::RelationshipToModel(relationship_annotation) => {
     148 + fields.insert(
     149 + field.alias.to_string(),
     150 + relationship::generate_relationship_ir(
     151 + field,
     152 + relationship_annotation,
     153 + data_connector,
     154 + type_mappings,
     155 + session_variables,
     156 + usage_counts,
     157 + )?,
     158 + );
     159 + }
     160 + _ => Err(error::InternalEngineError::UnexpectedAnnotation {
     161 + annotation: annotation.clone(),
     162 + })?,
     163 + },
     164 + 
     165 + annotation => Err(error::InternalEngineError::UnexpectedAnnotation {
     166 + annotation: annotation.clone(),
     167 + })?,
     168 + }
     169 + }
     170 + Ok(ResultSelectionSet { fields })
     171 +}
     172 + 
     173 +/// Convert selection set IR (`ResultSelectionSet`) into NDC fields
     174 +pub(crate) fn process_selection_set_ir<'s>(
     175 + model_selection: &ResultSelectionSet<'s>,
     176 + join_id_counter: &mut MonotonicCounter,
     177 +) -> Result<
     178 + (
     179 + IndexMap<String, ndc::models::Field>,
     180 + JoinLocations<RemoteJoin<'s>>,
     181 + ),
     182 + error::Error,
     183 +> {
     184 + let mut ndc_fields = IndexMap::new();
     185 + let mut join_locations = JoinLocations::new();
     186 + for (alias, field) in &model_selection.fields {
     187 + match field {
     188 + FieldSelection::Column { column } => {
     189 + ndc_fields.insert(
     190 + alias.to_string(),
     191 + ndc::models::Field::Column {
     192 + column: column.clone(),
     193 + },
     194 + );
     195 + }
     196 + FieldSelection::LocalRelationship {
     197 + query,
     198 + name,
     199 + relationship_info: _,
     200 + } => {
     201 + let (relationship_query, jl) =
     202 + model_selection::ir_to_ndc_query(query, join_id_counter)?;
     203 + let ndc_field = ndc::models::Field::Relationship {
     204 + query: Box::new(relationship_query),
     205 + relationship: name.to_string(),
     206 + arguments: BTreeMap::new(),
     207 + };
     208 + if !jl.locations.is_empty() {
     209 + join_locations.locations.insert(
     210 + alias.clone(),
     211 + Location {
     212 + join_node: None,
     213 + rest: jl,
     214 + },
     215 + );
     216 + }
     217 + ndc_fields.insert(alias.to_string(), ndc_field);
     218 + }
     219 + FieldSelection::RemoteRelationship {
     220 + ir,
     221 + relationship_info,
     222 + } => {
     223 + // For all the left join fields, create an alias and inject
     224 + // them into the NDC IR
     225 + let mut join_columns = HashMap::new();
     226 + for ((src_field_alias, src_field), target_field) in &relationship_info.join_mapping
     227 + {
     228 + let lhs_alias = make_hasura_phantom_field(&src_field.column);
     229 + ndc_fields.insert(
     230 + lhs_alias.clone(),
     231 + ndc::models::Field::Column {
     232 + column: src_field.column.clone(),
     233 + },
     234 + );
     235 + join_columns.insert(
     236 + src_field_alias.clone(),
     237 + (lhs_alias.clone(), target_field.clone()),
     238 + );
     239 + }
     240 + // Construct the `JoinLocations` tree
     241 + let (ndc_ir, sub_join_locations) =
     242 + model_selection::ir_to_ndc_ir(ir, join_id_counter)?;
     243 + let rj_info = RemoteJoin {
     244 + target_ndc_ir: ndc_ir,
     245 + target_data_connector: ir.data_connector,
     246 + join_columns,
     247 + };
     248 + join_locations.locations.insert(
     249 + alias.clone(),
     250 + Location {
     251 + join_node: Some(rj_info),
     252 + rest: sub_join_locations,
     253 + },
     254 + );
     255 + }
     256 + };
     257 + }
     258 + Ok((ndc_fields, join_locations))
     259 +}
     260 + 
     261 +fn make_hasura_phantom_field(field_name: &str) -> String {
     262 + format!("__hasura_phantom_field__{}", field_name)
     263 +}
     264 + 
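The phantom-field convention above re-selects every left-hand join column under an alias the user is unlikely to pick, so join inputs never collide with user-selected fields and are easy to recognize (presumably so they can be stripped when stitching join results back into the response). A tiny sketch of the same aliasing:

```
// Same aliasing scheme as make_hasura_phantom_field: prefix join
// columns so they can't collide with user aliases and are easy to
// identify later.
fn phantom_alias(column: &str) -> String {
    format!("__hasura_phantom_field__{column}")
}

fn main() {
    assert_eq!(phantom_alias("author_id"), "__hasura_phantom_field__author_id");
    // Detecting a phantom field afterwards:
    assert!(phantom_alias("author_id").starts_with("__hasura_phantom_field__"));
}
```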
     265 +/// From the fields in `ResultSelectionSet`, collect relationships recursively
     266 +/// and create NDC relationship definitions
     267 +pub(crate) fn collect_relationships(
     268 + selection: &ResultSelectionSet,
     269 + relationships: &mut BTreeMap<String, ndc::models::Relationship>,
     270 +) -> Result<(), error::Error> {
     271 + for field in selection.fields.values() {
     272 + match field {
     273 + FieldSelection::Column { .. } => (),
     274 + FieldSelection::LocalRelationship {
     275 + query,
     276 + name,
     277 + relationship_info,
     278 + } => {
     279 + relationships.insert(
     280 + name.to_string(),
     281 + relationship::process_relationship_definition(relationship_info)?,
     282 + );
     283 + collect_relationships(&query.selection, relationships)?;
     284 + }
      285 + // we ignore remote relationships, as we only generate relationship
      286 + // definitions for a single data connector
     287 + FieldSelection::RemoteRelationship { .. } => (),
     288 + };
     289 + }
     290 + Ok(())
     291 +}
     292 + 
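The recursion above only descends through local relationships. A self-contained sketch of the same pattern with hypothetical simplified types (`Field` and `collect` are stand-ins, not engine types): local relationships are recorded and recursed into, while columns and remote relationships are passed over:

```
use std::collections::BTreeMap;

// Hypothetical, simplified stand-in for the engine's field selection.
#[allow(dead_code)]
enum Field {
    Column,
    LocalRelationship { name: String, nested: Vec<Field> },
    RemoteRelationship,
}

fn collect(fields: &[Field], out: &mut BTreeMap<String, ()>) {
    for field in fields {
        match field {
            Field::Column => (),
            Field::LocalRelationship { name, nested } => {
                out.insert(name.clone(), ());
                // Recurse into the nested selection set.
                collect(nested, out);
            }
            // Remote relationships target another data connector; skip them.
            Field::RemoteRelationship => (),
        }
    }
}

fn main() {
    let fields = vec![
        Field::Column,
        Field::LocalRelationship {
            name: "author".into(),
            nested: vec![Field::LocalRelationship {
                name: "posts".into(),
                nested: vec![],
            }],
        },
    ];
    let mut rels = BTreeMap::new();
    collect(&fields, &mut rels);
    let names: Vec<&str> = rels.keys().map(String::as_str).collect();
    assert_eq!(names, vec!["author", "posts"]);
}
```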
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ir.rs
     1 +pub mod arguments;
     2 +pub mod commands;
     3 +pub mod filter;
     4 +pub mod model_selection;
     5 +pub mod mutation_root;
     6 +pub mod order_by;
     7 +pub mod permissions;
     8 +pub mod query_root;
     9 +pub mod relationship;
     10 +pub mod root_field;
     11 +pub mod selection_set;
     12 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/model_tracking.rs
      1 +//! Track the models and commands that were used in a query.
     2 + 
     3 +use super::ir::root_field::{self, RootField};
     4 +use crate::metadata::resolved::subgraph::Qualified;
     5 +use indexmap::IndexMap;
     6 +use lang_graphql::ast::common::Alias;
     7 +use open_dds::{commands::CommandName, models::ModelName};
     8 +use serde::{Deserialize, Serialize};
     9 + 
     10 +#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
     11 +pub struct UsagesCounts {
     12 + pub models_used: Vec<ModelCount>,
     13 + pub commands_used: Vec<CommandCount>,
     14 +}
     15 + 
     16 +impl Default for UsagesCounts {
     17 + fn default() -> Self {
     18 + Self::new()
     19 + }
     20 +}
     21 + 
     22 +impl UsagesCounts {
     23 + pub fn new() -> Self {
     24 + UsagesCounts {
     25 + models_used: Vec::new(),
     26 + commands_used: Vec::new(),
     27 + }
     28 + }
     29 +}
     30 + 
     31 +#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
     32 +#[serde(rename_all = "camelCase")]
     33 +pub struct CommandCount {
     34 + pub command: Qualified<CommandName>,
     35 + pub count: usize,
     36 +}
     37 + 
     38 +#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
     39 +#[serde(rename_all = "camelCase")]
     40 +pub struct ModelCount {
     41 + pub model: Qualified<ModelName>,
     42 + pub count: usize,
     43 +}
     44 + 
      45 +// Get all the OpenDDS models/commands that were used to execute a query. We loop through
      46 +// each root field in the query and get the models/commands that were used in each root
      47 +// field. We then merge the models/commands used in each root field into a single list.
      48 +// That means, if the same model was used in multiple root fields, we
      49 +// sum up the number of times that model was used.
     50 +pub fn get_all_usage_counts_in_query(ir: &IndexMap<Alias, RootField<'_, '_>>) -> UsagesCounts {
     51 + let mut all_usage_counts = UsagesCounts::new();
     52 + for ir_field in ir.values() {
     53 + match ir_field {
     54 + root_field::RootField::QueryRootField(ir) => match ir {
     55 + root_field::QueryRootField::TypeName { .. } => {}
     56 + root_field::QueryRootField::SchemaField { .. } => {}
     57 + root_field::QueryRootField::TypeField { .. } => {}
     58 + root_field::QueryRootField::ModelSelectOne { ir, .. } => {
     59 + let usage_counts = ir.usage_counts.clone();
     60 + extend_usage_count(usage_counts, &mut all_usage_counts);
     61 + }
     62 + root_field::QueryRootField::ModelSelectMany { ir, .. } => {
     63 + let usage_counts = ir.usage_counts.clone();
     64 + extend_usage_count(usage_counts, &mut all_usage_counts);
     65 + }
      66 + root_field::QueryRootField::NodeSelect(optional_ir) => match optional_ir {
      67 + None => {}
      68 + Some(ir) => {
      69 + let usage_counts = ir.usage_counts.clone();
     70 + extend_usage_count(usage_counts, &mut all_usage_counts);
     71 + }
     72 + },
     73 + root_field::QueryRootField::CommandRepresentation { ir, .. } => {
     74 + let usage_counts = ir.usage_counts.clone();
     75 + extend_usage_count(usage_counts, &mut all_usage_counts);
     76 + }
     77 + },
     78 + root_field::RootField::MutationRootField(rf) => match rf {
     79 + root_field::MutationRootField::TypeName { .. } => {}
     80 + root_field::MutationRootField::CommandRepresentation { ir, .. } => {
     81 + let usage_counts = ir.usage_counts.clone();
     82 + extend_usage_count(usage_counts, &mut all_usage_counts);
     83 + }
     84 + },
     85 + }
     86 + }
     87 + all_usage_counts
     88 +}
     89 + 
     90 +fn extend_usage_count(usage_counts: UsagesCounts, all_usage_counts: &mut UsagesCounts) {
     91 + for model_count in usage_counts.models_used.into_iter() {
     92 + let countable_model = &model_count.model;
     93 + match all_usage_counts
     94 + .models_used
     95 + .iter_mut()
     96 + .find(|element| element.model == *countable_model)
     97 + {
     98 + None => {
     99 + all_usage_counts.models_used.push(model_count);
     100 + }
     101 + Some(existing_count) => {
     102 + existing_count.count += model_count.count;
     103 + }
     104 + }
     105 + }
     106 + for command_count in usage_counts.commands_used.into_iter() {
      107 + let countable_command = &command_count.command;
      108 + match all_usage_counts
      109 + .commands_used
      110 + .iter_mut()
      111 + .find(|element| element.command == *countable_command)
     112 + {
     113 + None => {
     114 + all_usage_counts.commands_used.push(command_count);
     115 + }
     116 + Some(existing_count) => {
     117 + existing_count.count += command_count.count;
     118 + }
     119 + }
     120 + }
     121 +}
     122 + 
     123 +pub fn count_model(model: Qualified<ModelName>, all_usage_counts: &mut UsagesCounts) {
     124 + match all_usage_counts
     125 + .models_used
     126 + .iter_mut()
     127 + .find(|element| element.model == model)
     128 + {
     129 + None => {
     130 + let model_count = ModelCount {
     131 + model: model.clone(),
     132 + count: 1,
     133 + };
     134 + all_usage_counts.models_used.push(model_count);
     135 + }
     136 + Some(existing_model) => {
     137 + existing_model.count += 1;
     138 + }
     139 + }
     140 +}
     141 + 
     142 +pub fn count_command(command: Qualified<CommandName>, all_usage_counts: &mut UsagesCounts) {
     143 + match all_usage_counts
     144 + .commands_used
     145 + .iter_mut()
     146 + .find(|element| element.command == command)
     147 + {
     148 + None => {
     149 + let command_count = CommandCount {
     150 + command: command.clone(),
     151 + count: 1,
     152 + };
     153 + all_usage_counts.commands_used.push(command_count);
     154 + }
     155 + Some(existing_command) => {
     156 + existing_command.count += 1;
     157 + }
     158 + }
     159 +}
     160 + 
     161 +#[test]
     162 +fn test_extend_usage_count() {
     163 + let model_count1 = ModelCount {
     164 + model: Qualified::new("subgraph".to_string(), ModelName("model1".to_string())),
     165 + count: 1,
     166 + };
     167 + let model_count2 = ModelCount {
     168 + model: Qualified::new("subgraph".to_string(), ModelName("model2".to_string())),
     169 + count: 5,
     170 + };
     171 + let model_count3 = ModelCount {
     172 + model: Qualified::new("subgraph".to_string(), ModelName("model3".to_string())),
     173 + count: 2,
     174 + };
     175 + let command_count1 = CommandCount {
     176 + command: Qualified::new("subgraph".to_string(), CommandName("command1".to_string())),
     177 + count: 2,
     178 + };
     179 + let command_count2 = CommandCount {
     180 + command: Qualified::new("subgraph".to_string(), CommandName("command2".to_string())),
     181 + count: 1,
     182 + };
     183 + let command_count3 = CommandCount {
     184 + command: Qualified::new("subgraph".to_string(), CommandName("command3".to_string())),
     185 + count: 3,
     186 + };
     187 + let usage_counts = UsagesCounts {
     188 + models_used: vec![model_count1, model_count2.clone()],
     189 + commands_used: vec![command_count1, command_count2.clone()],
     190 + };
     191 + let mut aggregator = UsagesCounts {
     192 + models_used: vec![model_count2, model_count3],
     193 + commands_used: vec![command_count2, command_count3],
     194 + };
     195 + extend_usage_count(usage_counts, &mut aggregator);
     196 + let expected = UsagesCounts {
     197 + models_used: vec![
     198 + ModelCount {
     199 + model: Qualified::new("subgraph".to_string(), ModelName("model2".to_string())),
     200 + count: 10,
     201 + },
     202 + ModelCount {
     203 + model: Qualified::new("subgraph".to_string(), ModelName("model3".to_string())),
     204 + count: 2,
     205 + },
     206 + ModelCount {
     207 + model: Qualified::new("subgraph".to_string(), ModelName("model1".to_string())),
     208 + count: 1,
     209 + },
     210 + ],
     211 + commands_used: vec![
     212 + CommandCount {
     213 + command: Qualified::new(
     214 + "subgraph".to_string(),
     215 + CommandName("command2".to_string()),
     216 + ),
     217 + count: 2,
     218 + },
     219 + CommandCount {
     220 + command: Qualified::new(
     221 + "subgraph".to_string(),
     222 + CommandName("command3".to_string()),
     223 + ),
     224 + count: 3,
     225 + },
     226 + CommandCount {
     227 + command: Qualified::new(
     228 + "subgraph".to_string(),
     229 + CommandName("command1".to_string()),
     230 + ),
     231 + count: 2,
     232 + },
     233 + ],
     234 + };
     235 + assert_eq!(aggregator, expected);
     236 +}
     237 + 
     238 +#[test]
     239 +fn test_counter_functions() {
     240 + let mut aggregator = UsagesCounts::new();
     241 + count_command(
     242 + Qualified::new("subgraph".to_string(), CommandName("command1".to_string())),
     243 + &mut aggregator,
     244 + );
     245 + assert_eq!(
     246 + aggregator,
     247 + UsagesCounts {
     248 + models_used: Vec::new(),
     249 + commands_used: vec![CommandCount {
     250 + command: Qualified::new(
     251 + "subgraph".to_string(),
     252 + CommandName("command1".to_string())
     253 + ),
     254 + count: 1,
     255 + }]
     256 + }
     257 + );
     258 + count_command(
     259 + Qualified::new("subgraph".to_string(), CommandName("command1".to_string())),
     260 + &mut aggregator,
     261 + );
     262 + assert_eq!(
     263 + aggregator,
     264 + UsagesCounts {
     265 + models_used: Vec::new(),
     266 + commands_used: vec![CommandCount {
     267 + command: Qualified::new(
     268 + "subgraph".to_string(),
     269 + CommandName("command1".to_string())
     270 + ),
     271 + count: 2,
     272 + }]
     273 + }
     274 + );
     275 + count_model(
     276 + Qualified::new("subgraph".to_string(), ModelName("model1".to_string())),
     277 + &mut aggregator,
     278 + );
     279 + assert_eq!(
     280 + aggregator,
     281 + UsagesCounts {
     282 + models_used: vec![ModelCount {
     283 + model: Qualified::new("subgraph".to_string(), ModelName("model1".to_string())),
     284 + count: 1,
     285 + }],
     286 + commands_used: vec![CommandCount {
     287 + command: Qualified::new(
     288 + "subgraph".to_string(),
     289 + CommandName("command1".to_string())
     290 + ),
     291 + count: 2,
     292 + }]
     293 + }
     294 + );
     295 + count_model(
     296 + Qualified::new("subgraph".to_string(), ModelName("model1".to_string())),
     297 + &mut aggregator,
     298 + );
     299 + assert_eq!(
     300 + aggregator,
     301 + UsagesCounts {
     302 + models_used: vec![ModelCount {
     303 + model: Qualified::new("subgraph".to_string(), ModelName("model1".to_string())),
     304 + count: 2,
     305 + }],
     306 + commands_used: vec![CommandCount {
     307 + command: Qualified::new(
     308 + "subgraph".to_string(),
     309 + CommandName("command1".to_string())
     310 + ),
     311 + count: 2,
     312 + }]
     313 + }
     314 + );
     315 + count_model(
     316 + Qualified::new("subgraph".to_string(), ModelName("model2".to_string())),
     317 + &mut aggregator,
     318 + );
     319 + assert_eq!(
     320 + aggregator,
     321 + UsagesCounts {
     322 + models_used: vec![
     323 + ModelCount {
     324 + model: Qualified::new("subgraph".to_string(), ModelName("model1".to_string())),
     325 + count: 2,
     326 + },
     327 + ModelCount {
     328 + model: Qualified::new("subgraph".to_string(), ModelName("model2".to_string())),
     329 + count: 1,
     330 + }
     331 + ],
     332 + commands_used: vec![CommandCount {
     333 + command: Qualified::new(
     334 + "subgraph".to_string(),
     335 + CommandName("command1".to_string())
     336 + ),
     337 + count: 2,
     338 + }]
     339 + }
     340 + );
     341 + count_command(
     342 + Qualified::new("subgraph".to_string(), CommandName("command2".to_string())),
     343 + &mut aggregator,
     344 + );
     345 + assert_eq!(
     346 + aggregator,
     347 + UsagesCounts {
     348 + models_used: vec![
     349 + ModelCount {
     350 + model: Qualified::new("subgraph".to_string(), ModelName("model1".to_string())),
     351 + count: 2,
     352 + },
     353 + ModelCount {
     354 + model: Qualified::new("subgraph".to_string(), ModelName("model2".to_string())),
     355 + count: 1,
     356 + }
     357 + ],
     358 + commands_used: vec![
     359 + CommandCount {
     360 + command: Qualified::new(
     361 + "subgraph".to_string(),
     362 + CommandName("command1".to_string())
     363 + ),
     364 + count: 2,
     365 + },
     366 + CommandCount {
     367 + command: Qualified::new(
     368 + "subgraph".to_string(),
     369 + CommandName("command2".to_string())
     370 + ),
     371 + count: 1,
     372 + }
     373 + ]
     374 + }
     375 + );
     376 +}
     377 + 
  • ■ ■ ■ ■ ■ ■
    v3/engine/src/execute/ndc.rs
     1 +use serde_json as json;
     2 + 
     3 +use gql::normalized_ast;
     4 +use lang_graphql as gql;
     5 +use lang_graphql::ast::common as ast;
     6 +use ndc_client as ndc;
     7 +use tracing_util::{set_attribute_on_active_span, AttributeVisibility, SpanVisibility};
     8 + 
     9 +use super::error;
     10 +use super::process_response::process_command_rows;
     11 +use super::query_plan::ProcessResponseAs;
     12 +use crate::metadata::resolved;
     13 +use crate::schema::GDS;
     14 + 
      15 +/// Executes an NDC query
     16 +pub async fn execute_ndc_query<'n, 's>(
     17 + http_client: &reqwest::Client,
     18 + query: ndc::models::QueryRequest,
     19 + data_connector: &resolved::data_connector::DataConnector,
     20 + execution_span_attribute: String,
     21 + field_span_attribute: String,
     22 +) -> Result<Vec<ndc::models::RowSet>, error::Error> {
     23 + let tracer = tracing_util::global_tracer();
     24 + tracer
     25 + .in_span_async("execute_ndc_query", SpanVisibility::User, || {
     26 + Box::pin(async {
     27 + set_attribute_on_active_span(
     28 + AttributeVisibility::Default,
     29 + "operation",
     30 + execution_span_attribute,
     31 + );
     32 + set_attribute_on_active_span(
     33 + AttributeVisibility::Default,
     34 + "field",
     35 + field_span_attribute,
     36 + );
     37 + let connector_response =
     38 + fetch_from_data_connector(http_client, query, data_connector).await?;
     39 + Ok(connector_response.0)
     40 + })
     41 + })
     42 + .await
     43 +}
     44 + 
     45 +pub(crate) async fn fetch_from_data_connector<'s>(
     46 + http_client: &reqwest::Client,
     47 + query_request: ndc::models::QueryRequest,
     48 + data_connector: &resolved::data_connector::DataConnector,
     49 +) -> Result<ndc::models::QueryResponse, error::Error> {
     50 + let tracer = tracing_util::global_tracer();
     51 + tracer
     52 + .in_span_async(
     53 + "fetch_from_data_connector",
     54 + SpanVisibility::Internal,
     55 + || {
     56 + Box::pin(async {
     57 + let ndc_config = ndc::apis::configuration::Configuration {
     58 + base_path: data_connector.url.get_url(ast::OperationType::Query),
     59 + user_agent: None,
      60 + // This isn't expensive: reqwest::Client is behind an Arc
     61 + client: http_client.clone(),
     62 + headers: data_connector.headers.0.clone(),
     63 + };
     64 + ndc::apis::default_api::query_post(&ndc_config, query_request)
     65 + .await
     66 + .map_err(error::Error::from) // ndc_client::apis::Error -> InternalError -> Error
     67 + })
     68 + },
     69 + )
     70 + .await
     71 +}
     72 + 
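The `map_err(error::Error::from)` above leans on a chain of `From` conversions, as the trailing comment notes. A self-contained sketch of the same layering, with `ClientError`, `InternalError`, and `Error` as hypothetical stand-ins for the real types:

```
// Hypothetical stand-ins for ndc_client::apis::Error, the engine's
// InternalError, and its top-level Error, illustrating the chain.
#[derive(Debug)]
struct ClientError(String);

#[derive(Debug)]
enum InternalError {
    Client(ClientError),
}

#[derive(Debug)]
enum Error {
    Internal(InternalError),
}

impl From<ClientError> for InternalError {
    fn from(e: ClientError) -> Self {
        InternalError::Client(e)
    }
}

// Composing the two hops lets callers write `map_err(Error::from)` directly.
impl From<ClientError> for Error {
    fn from(e: ClientError) -> Self {
        Error::Internal(InternalError::from(e))
    }
}

fn main() {
    let res: Result<(), ClientError> = Err(ClientError("connection refused".into()));
    let lifted: Result<(), Error> = res.map_err(Error::from);
    assert!(matches!(lifted, Err(Error::Internal(_))));
}
```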
      73 +/// Executes an NDC mutation
     74 +pub(crate) async fn execute_ndc_mutation<'n, 's>(
     75 + http_client: &reqwest::Client,
     76 + query: ndc::models::MutationRequest,
     77 + data_connector: &resolved::data_connector::DataConnector,
     78 + selection_set: &'n normalized_ast::SelectionSet<'s, GDS>,
     79 + execution_span_attribute: String,
     80 + field_span_attribute: String,
     81 + process_response_as: ProcessResponseAs<'s>,
     82 +) -> Result<json::Value, error::Error> {
     83 + let tracer = tracing_util::global_tracer();
     84 + tracer
     85 + .in_span_async("execute_ndc_mutation", SpanVisibility::User, || {
     86 + Box::pin(async {
     87 + set_attribute_on_active_span(
     88 + AttributeVisibility::Default,
     89 + "operation",
     90 + execution_span_attribute,
     91 + );
     92 + set_attribute_on_active_span(
     93 + AttributeVisibility::Default,
     94 + "field",
     95 + field_span_attribute,
     96 + );
     97 + let connector_response =
     98 + fetch_from_data_connector_mutation(http_client, query, data_connector).await?;
     99 + // Post process the response to add the `__typename` fields
     100 + tracer.in_span("process_response", SpanVisibility::Internal, || {
      101 + // NOTE: the NDC mutation response contains a vector of
      102 + // operation results; we pick the first one and treat its
      103 + // absence as a bad connector response.
     104 + let mutation_results = connector_response
     105 + .operation_results
     106 + .into_iter()
     107 + .next()
     108 + .ok_or(error::InternalDeveloperError::BadGDCResponse {
      109 + summary: "missing operation results".into(),
     110 + })?;
     111 + match process_response_as {
     112 + ProcessResponseAs::CommandResponse {
     113 + command_name,
     114 + type_container,
     115 + } => {
     116 + let result = process_command_rows(
     117 + command_name,
     118 + mutation_results.returning,
     119 + selection_set,
     120 + type_container,
     121 + )?;
     122 + Ok(json::to_value(result).map_err(error::Error::from))
     123 + }
     124 + _ => Err(error::Error::from(
     125 + error::InternalEngineError::InternalGeneric {
     126 + description: "mutations without commands are not supported yet"
     127 + .into(),
     128 + },
     129 + )),
     130 + }?
     131 + })
     132 + })
     133 + })
     134 + .await
     135 +}
     136 + 
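Picking the first operation result (or failing with a developer-facing error) is a small pattern worth isolating. A self-contained sketch, with a plain `Vec<u32>` as a hypothetical stand-in for the operation results and a `String` for the error type:

```
// Mirrors the `.into_iter().next().ok_or(...)` pattern above.
fn first_result(results: Vec<u32>) -> Result<u32, String> {
    results
        .into_iter()
        .next()
        .ok_or_else(|| "missing operation results".to_string())
}

fn main() {
    assert_eq!(first_result(vec![7, 8]), Ok(7));
    assert!(first_result(vec![]).is_err());
}
```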
     137 +pub(crate) async fn fetch_from_data_connector_mutation<'s>(
     138 + http_client: &reqwest::Client,
     139 + query_request: ndc::models::MutationRequest,
     140 + data_connector: &resolved::data_connector::DataConnector,
     141 +) -> Result<ndc::models::MutationResponse, error::Error> {
     142 + let tracer = tracing_util::global_tracer();
     143 + tracer
     144 + .in_span_async(
     145 + "fetch_from_data_connector",
     146 + SpanVisibility::Internal,
     147 + || {
     148 + Box::pin(async {
      149 + let ndc_config = ndc::apis::configuration::Configuration {
     150 + base_path: data_connector.url.get_url(ast::OperationType::Mutation),
     151 + user_agent: None,
      152 + // This isn't expensive: reqwest::Client is behind an Arc
     153 + client: http_client.clone(),
     154 + headers: data_connector.headers.0.clone(),
     155 + };
      156 + ndc::apis::default_api::mutation_post(&ndc_config, query_request)
     157 + .await
     158 + .map_err(error::Error::from) // ndc_client::apis::Error -> InternalError -> Error
     159 + })
     160 + },
     161 + )
     162 + .await
     163 +}
     164 + 