## Description
This PR iterates on #459.
Rather than serving the engine metadata, it serves an arbitrary file
given by the command-line argument `--introspection-metadata`.
Specifying this argument gives rise to the endpoints `/metadata` and
`/metadata-hash`.
![image](https://github.com/hasura/v3-engine/assets/358550/63040f02-876a-4c29-8cf1-52a305ffff67)
Update: We only load the file in at engine startup and serve that
version. Changing the file on disk will not change what the engine
serves.
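For context, a minimal sketch of how endpoints like these can be wired up with `axum` (the state type, hashing choice, and function names below are illustrative assumptions, not the engine's actual code):

```rust
use std::sync::Arc;
use axum::{extract::State, routing::get, Router};

// Hypothetical state: the file is read once at startup and kept in memory,
// so later changes on disk are not reflected in responses.
#[derive(Clone)]
struct IntrospectionMetadata {
    raw: Arc<str>,
    hash: Arc<str>,
}

fn introspection_routes(path: &str) -> std::io::Result<Router> {
    let raw = std::fs::read_to_string(path)?; // loaded once, at engine startup
    let hash = format!("{:x}", md5::compute(raw.as_bytes())); // hash scheme is an assumption
    let state = IntrospectionMetadata { raw: raw.into(), hash: hash.into() };
    Ok(Router::new()
        .route("/metadata", get(|State(m): State<IntrospectionMetadata>| async move {
            m.raw.to_string()
        }))
        .route("/metadata-hash", get(|State(m): State<IntrospectionMetadata>| async move {
            m.hash.to_string()
        }))
        .with_state(state))
}
```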
---------
Co-authored-by: Gil Mizrahi <[email protected]>
V3_GIT_ORIGIN_REV_ID: db88adb5c08c4489cc1abd5fb5236b8d5ba51b9a
## Description
Previously we moved all our types around in one big bucket, which meant
we often had to check that we actually had the thing we wanted. This
splits it up so dependencies are more granular and clearer.
This means that instead of passing `types` around, we'll pass
`scalar_types` and `object_types` separately. Usually just `object_types`, though.
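As an illustration of the shape of the change (the types and function names below are made up for the example, not the real resolver code):

```rust
use std::collections::BTreeMap;

// Placeholder types, purely for illustration.
struct ObjectType { fields: Vec<String> }
struct ScalarType { graphql_name: String }

// Before (sketch): one bucket of every kind of type, so each caller had to
// check it was handed the kind it actually wanted.
enum SomeType { Object(ObjectType), Scalar(ScalarType) }

fn lookup_object_before(
    types: &BTreeMap<String, SomeType>,
    name: &str,
) -> Option<&ObjectType> {
    match types.get(name) {
        Some(SomeType::Object(o)) => Some(o),
        _ => None, // a scalar where an object was expected, or missing entirely
    }
}

// After (sketch): separate maps, so dependencies are explicit and a lookup
// can't hand back the wrong kind of type.
fn lookup_object_after(
    object_types: &BTreeMap<String, ObjectType>,
    name: &str,
) -> Option<&ObjectType> {
    object_types.get(name)
}
```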
V3_GIT_ORIGIN_REV_ID: 6a6b8d6265b0391f8910f3d4f8932ad151453c18
## Description
As a temporary means of supporting a local development setup, this PR
adds a `/metadata` endpoint that serves the raw metadata that the engine
was started with.
![image](https://github.com/hasura/v3-engine/assets/358550/bf34c3f8-d153-4a93-9044-dbaa15299481)
V3_GIT_ORIGIN_REV_ID: 44c552cfe29ee587fa0d383f7788aacc5579770f
## Description
As per https://github.com/hasura/v3-engine/pull/450, break out creation
of the `data_connectors` info (and related types) into its own files.
Functional no-op.
V3_GIT_ORIGIN_REV_ID: 7a8d445217a4fac2bbb135aa48baa20a0789e785
This injects trace context headers into requests to the auth hook,
allowing us to figure out how much time is spent there.
I added a basic tracing setup to the dev-auth-webhook using
`tracing-util`, allowing me to verify that this works. This required
moving the Dockerfile to the root so the build context contains the
`tracing-util` crate too.
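A rough sketch of what that propagation looks like with the `opentelemetry` crates (the helper name and exact wiring are assumptions; in the engine this goes through `tracing-util`):

```rust
use opentelemetry::global;
use opentelemetry_http::HeaderInjector;

// Hypothetical helper: copy the current trace context into the outgoing
// request headers (traceparent/tracestate), so spans recorded by the auth
// hook join the engine's trace.
fn inject_trace_context(
    cx: &opentelemetry::Context,
    headers: &mut reqwest::header::HeaderMap,
) {
    global::get_text_map_propagator(|propagator| {
        propagator.inject_context(cx, &mut HeaderInjector(headers));
    });
}
```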
I have also fixed the reference agent (by updating it), and patched our
Docker Compose files to correctly set up connectivity to Jaeger.
V3_GIT_ORIGIN_REV_ID: 2ff930bda4147d00dcc73268a814b08c8a07a359
## Description
This is an attempt to somewhat document how roles / annotations work in
`v3-engine`. The main purpose of this exercise was to solidify my
understanding, so I would very much welcome any corrections.
V3_GIT_ORIGIN_REV_ID: 28600998c8a01ef7f95198b44b875f4f14873793
## Description
I needed this, so I made it. It's nothing too complex: we just
pretty-print the `schemars` schema for the root `Metadata` type.
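In essence it boils down to something like this (the `Metadata` struct here is a stand-in so the sketch compiles; the real root type lives in the OpenDD crate):

```rust
use schemars::{schema_for, JsonSchema};

// Stand-in for the real root `Metadata` type.
#[derive(JsonSchema)]
struct Metadata {
    version: String,
}

fn main() -> Result<(), serde_json::Error> {
    // Generate the JSON Schema for the root type and pretty-print it.
    let schema = schema_for!(Metadata);
    println!("{}", serde_json::to_string_pretty(&schema)?);
    Ok(())
}
```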
## Changelog
- Add a changelog entry (in the "Changelog entry" section below) if the
changes in this PR have any user-facing impact. See [changelog
guide](https://github.com/hasura/graphql-engine-mono/wiki/Changelog-Guide).
- If no changelog is required ignore/remove this section and add a
`no-changelog-required` label to the PR.
### Product
_(Select all products this will be available in)_
- [ ] community-edition
- [ ] cloud
<!-- product : end : DO NOT REMOVE -->
### Type
<!-- See changelog structure:
https://github.com/hasura/graphql-engine-mono/wiki/Changelog-Guide#structure-of-our-changelog
-->
_(Select only one. In case of multiple, choose the most appropriate)_
- [ ] highlight
- [ ] enhancement
- [ ] bugfix
- [ ] behaviour-change
- [ ] performance-enhancement
- [ ] security-fix
<!-- type : end : DO NOT REMOVE -->
### Changelog entry
<!--
- Add a user understandable changelog entry
- Include all details needed to understand the change. Try including
links to docs or issues if relevant
- For Highlights start with a H4 heading (#### <entry title>)
- Get the changelog entry reviewed by your team
-->
_Replace with changelog entry_
<!-- changelog-entry : end : DO NOT REMOVE -->
<!-- changelog : end : DO NOT REMOVE -->
V3_GIT_ORIGIN_REV_ID: a9e75bfa06c35577c17d8cbf0d021b1f56826a28
## Description
We have no plans to make changes to this feature, so we consider it production ready.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/10769
GitOrigin-RevId: 08319fb6c464ca5c11c033be5abc0b7989e1da4f
## Description
Further breaking up the big error type. Functional no-op.
V3_GIT_ORIGIN_REV_ID: d34acb7fd6421c250c214b133b8a107e03155c70
## Description
Resolving metadata is pretty messy, so we're breaking it into more
explicit steps. This breaks out the first, and arguably most trivial,
step.
Functional no-op.
V3_GIT_ORIGIN_REV_ID: eca1ce3276f826e769ac4a29d62504542e41848d
Generate a default `deprecationReason` in the GraphQL schema for OpenDd
metadata marked as deprecated without a reason.
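Roughly the behaviour being added, as a sketch (type and function names here are illustrative, not the engine's actual code; "No longer supported" is GraphQL's conventional default reason):

```rust
const DEFAULT_DEPRECATION_REASON: &str = "No longer supported";

// Stand-in for the OpenDd deprecation annotation.
struct Deprecated {
    reason: Option<String>,
}

// If metadata is marked deprecated but gives no reason, fall back to the
// default instead of emitting an empty `deprecationReason`.
fn graphql_deprecation_reason(deprecated: Option<&Deprecated>) -> Option<String> {
    deprecated.map(|d| {
        d.reason
            .clone()
            .unwrap_or_else(|| DEFAULT_DEPRECATION_REASON.to_string())
    })
}
```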
V3_GIT_ORIGIN_REV_ID: 6979bd264b5c11d24b6c634115b6fbd8405a5ba6
## Description
More work to break down the giant `Error` type in the metadata resolve step.
Functional no-op.
V3_GIT_ORIGIN_REV_ID: 8cfa4ad0bef254e93241d254123910bf3d5357f3
## Description
Following the approach taken in
https://github.com/hasura/ndc-postgres/pull/402, this moves the `clippy`
settings into the Cargo workspace file instead of passing them on each
invocation.
We enable all pedantic settings, run `cargo clippy --fix` to auto-fix a
few things, and then manually disable all the other lints.
Plenty of them are worth enabling and fixing in future, IMO.
---------
Co-authored-by: Samir Talwar <[email protected]>
V3_GIT_ORIGIN_REV_ID: aa0e6ccb8d72a7393e14b5c58b82077a67d9cb15
## Description
- Update to `ndc-spec` `0.1.2`
- Use `ndc_models`, since `ndc_client` was removed
- Use `Int32` in `custom_connector` everywhere
V3_GIT_ORIGIN_REV_ID: 00c6e7a6c213ab0de31303a93f8446c1d371c538
# ⚠️ Behaviour change in query execution
## Description
This PR fixes a bug with boolean expressions (behaviour that differed
from v2).
Slack thread:
https://hasurahq.slack.com/archives/C066TKMH79R/p1711987325682919
JIRA: https://hasurahq.atlassian.net/browse/V3ENGINE-67
V3_GIT_ORIGIN_REV_ID: 4fcfc16a9a88ed6362315ca2f47911e0c97b7829
## Description
Fix a bug which was causing an internal error when `null` was returned
by NDC for a field of array or object type.
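A sketch of the behaviour being fixed (illustrative only, not the engine's actual response-processing code):

```rust
use serde_json::Value;

// A `null` returned by the NDC for a nested object or array field should be
// passed through as null, not treated as a malformed response.
fn reshape_nested_field(value: &Value) -> Result<Value, String> {
    match value {
        Value::Null => Ok(Value::Null), // previously this path raised an internal error
        Value::Object(_) | Value::Array(_) => Ok(value.clone()),
        other => Err(format!("expected object, array, or null; got {other}")),
    }
}
```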
### Product
_(Select all products this will be available in)_
- [x] community-edition
- [x] cloud
### Type
_(Select only one. In case of multiple, choose the most appropriate)_
- [ ] highlight
- [ ] enhancement
- [x] bugfix
- [ ] behaviour-change
- [ ] performance-enhancement
- [ ] security-fix
### Changelog entry
Fix a bug which was causing an internal error when `null` was returned
by NDC for a field of array or object type.
V3_GIT_ORIGIN_REV_ID: 5c935ccd6720b5e5966dfa87c2e21dbb7a2b36f2
- Introduce a field in the NDC `Configuration` struct that carries an
optional limit (`usize`) value.
- When set, reject NDC responses that are greater than the limit.
- Define an `HttpContext` struct that captures both the `reqwest::Client`
and the optional limit value, and replace the `http_client` argument with
`http_context: &HttpContext` in all execute-related functions (see the
sketch below).
- Callers of `execute_query` in multitenant code need to pass a
reference to an `HttpContext` with an appropriate NDC response size limit.
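A sketch of the shape described above (field names are assumptions, not the engine's exact definitions):

```rust
/// Carries everything needed to talk to NDCs during execution.
pub struct HttpContext {
    /// Shared HTTP client used for all NDC requests.
    pub client: reqwest::Client,
    /// When `Some`, NDC responses larger than this many bytes are rejected.
    pub ndc_response_size_limit: Option<usize>,
}

// Callers (e.g. multitenant code invoking `execute_query`) construct one and
// pass `&HttpContext` into the execute functions in place of the old
// `http_client` argument.
```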
V3_GIT_ORIGIN_REV_ID: 85a3647c4d136cc8d887f343736cc011166f036f
## Description
Set our `ndc-postgres` connector in tests to use the new mutations versions
so we can test Boolean Expressions. This also does some housekeeping, like
ensuring we pull the latest `ndc-postgres` in CI and exposing port `8080`
from `ndc-postgres` to fix the local dev flow.
V3_GIT_ORIGIN_REV_ID: 4c92670e9976a3f75ec31e1224079799380ef6e2
## Description
We'll shortly be adding `BooleanExpression` to `ValueExpression`, which
will require resolving the internal `ModelPredicate`. This PR adds a
resolving step for `ValueExpression` to simplify that later change. It is
essentially a no-op that introduces a new type.
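A hypothetical sketch of the resolved type this introduces (variant names are illustrative; the point is that it gives a later change somewhere to put a resolved `BooleanExpression`):

```rust
// Today this only mirrors the OpenDd input, which is what makes the step a
// near no-op, but it creates the seam for the follow-up work.
pub enum ValueExpression {
    Literal(serde_json::Value),
    SessionVariable(String),
    // BooleanExpression(...) — to be added when predicates are resolved here
}
```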
V3_GIT_ORIGIN_REV_ID: 8bfe4a180e12ae50d8f131072886054c0e618ec4
- Move redundant code in `client.rs` into a separate function.
- Document the utility functions with comments.
V3_GIT_ORIGIN_REV_ID: f172ec2309b48c627f4ab9179efcb4c278e82989
This seems appropriate now that we've stabilized the new configuration.
Of note are the configuration updates and the use of an environment
variable to specify the connection URI. This upgrade also fixes the
health checks.
Regenerating the configuration lost the table descriptions, which seems
to be because they were not present in the Chinook SQL. I have dragged
the Chinook SQL in from ndc-postgres and kept it separate from the
initialization of other tables.
The auto-generated configuration is slightly different from the
manually-created configuration in that the collection names are
singular, not plural. This means that I had to change a lot of test
metadata files too.
V3_GIT_ORIGIN_REV_ID: 2b66fd3049aaf4daeb386915ea3b64a209b1f393
…arger ones
…the goal being to save on data transfer costs; libdeflate is much faster than zlib for larger inputs and at higher compression levels. A few notes:
In the last month...
- 95% of response bodies > 20kB compress below 32% (with zlib level 1)
- The 10% of responses > 20kB comprise 75% of egress traffic to clients
- libdeflate at level 6 is comparable in performance to zlib level 1, and twice as fast as zlib level 6
- We expect compressing 20kB+ response bodies at level 6 to reduce data transfer to clients by 25% or so (although this is difficult to predict accurately)
The new libdeflate bindings used here also need review: https://github.com/hasura/libdeflate-hs
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/10341
GitOrigin-RevId: 6c86d524ce7577c30717e2a57e06c185405cbbfb
This fixes the poor memory behavior we observe in #9447. Observe that a
Haskell program has layers of memory usage/management, from inner to
outer:
1) objects on the Haskell heap, aka "live data"
2) blocks of memory managed by the Haskell RTS, containing live data
   (and including overheads from fragmentation and space for
   copying into during GC)
3) foreign data malloc'd and free'd from Haskell, but indirectly (during
   GC, "finalizers" are run that generally call `free`)
4) the implementation of malloc itself that we're linked against maintains a
   list of blocks of free memory requested from the OS; it decides when
   to return blocks back (reflected in lower RSS from top), and
   fragmentation is also a concern here
4a) should the malloc decide to return memory, it might use MADV_FREE
   or another mechanism which won't be reflected in RSS unless there is
   memory pressure. This further complicates things.
(1) and (2) can be monitored from /dev/rts_stats. (3) can be monitored
with heaptrack, valgrind, etc. (4) was where our issues were here.
mimalloc helps because:
- it seems to handle fragmentation better, for the large response sizes
in our repro
- it is more eager to return memory back to the OS (but note: newer
versions use MADV_FREE and probably aren't usable for us. See:
https://github.com/microsoft/mimalloc/issues/776 )
note that `static.o` gets linked second, but according to my tests the
order shouldn't matter (despite what the mimalloc docs suggest):
```
gcc '-fuse-ld=lld' -Wl,--no-as-needed -o
/home/me/Work/hasura/graphql-engine-mono/dist-newstyle/build/x86_64-linux/ghc-9.4.5/graphql-engine-1.0.0/x/graphql-engine/opt/build/graphql-engine/graphql-engine
-lm -no-pie -Wl,--gc-sections
/home/me/Work/hasura/graphql-engine-mono/dist-newstyle/build/x86_64-linux/ghc-9.4.5/graphql-engine-1.0.0/x/graphql-engine/opt/build/graphql-engine/graphql-engine-tmp/Main.o
/home/me/Work/hasura/graphql-engine-mono/dist-newstyle/build/x86_64-linux/ghc-9.4.5/graphql-engine-1.0.0/x/graphql-engine/opt/build/graphql-engine/graphql-engine-tmp/../preload-mimalloc/mimalloc/src/static.o
…
```
NOTE: the promptness of memory reclamation depends on both
HASURA_GRAPHQL_PG_TIMEOUT and HASURA_GRAPHQL_PG_CONN_LIFETIME. By
default a connection's resources will be GC'd after 3min when idle, or
after no more than 10 minutes.
GitOrigin-RevId: 9b522c39159b4c710c5672cc8c62c5c723d4bd13
Previously, when users had an idle system with a large working set (i.e. a large schema), they would likely see CPU spikes every 10 seconds.
See: https://github.com/hasura/graphql-engine/issues/9592#issuecomment-1580543694
Now we perform a lighter-weight minor GC in that case.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9841
GitOrigin-RevId: 2a49a83c4b763546a901558641ab9b6460ebffd9
context: This is foundation work, before we change how the server chooses whether or not to compress
part of effort: #5518
-----
Prior to this change it was difficult to understand how the functionality in this module related to the semantics of Accept-Encoding. We also didn't correctly handle directives with qvalues.
After this change certain technical infelicities are called out without modifying the behavior of the server; for instance we continue to fall back to identity (no compression) in the case where technically we're supposed to return 406, and we also continue to treat `*` conservatively as meaning “use no compression”.
The only external change here is that `gzip;q=x.y` now results in a gzipped response.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7213
GitOrigin-RevId: 93fdbf4ac27c5e9daa15b51842dee9ad9d5af140
This increases compile times for a full build of exe:graphql-engine from
13min to 16min on my machine, and less on CI.
NOTE: O1 on its own improves compile time slightly (~80 sec) and, seemingly,
doesn't negatively impact performance at all:
https://github.com/hasura/graphql-engine-mono/pull/7029
GitOrigin-RevId: 040851acb57f63943a48da03004c40e31c333c75
FWIW: I was looking here because ghc-debug showed many closures associated with the Applicative instance,
but defining Monoid/Semigroup by hand and inlining didn't seem to have any effect.
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/7026
GitOrigin-RevId: 22bb57950bd4f837ce7e33ac9d15fa48a4cb3558
Introspection data from 'result' is retained unnecessarily for
non-admin roles, where we just need the validation effect.
live_bytes is down 6% for huge_schema.
Also force remoteSchemaErrors, though I don't know if this has any
performance effect.
GitOrigin-RevId: ac41019b309043065b176f99606c316523f53f00
Just forcing some of the most numerous thunks (found with -hi profiling); it
seems some of these were retaining a significant amount of data.
GitOrigin-RevId: 4d7b22d1016330d31b19da96282b68bbed5f1907
This improves memory residency, at least before queries arrive.
Compile time and binary size seem about the same.
In starting to look at a dominator tree of the heap when loading chinook, I
saw a chain of alternating function/InputFieldsParser closures from
updateOperator, and I was curious what would happen if I inlined some of
the continuation-passing-like schema parser functions. It looks like it
forces some function closure thunks, but I don't have more insight than
that.
GitOrigin-RevId: 0ebfadfa258e3d15e9e556afe7b54a3c585d2ef4
You will need the fork of 9.2.4 that we're using (for now):
```
ghcup -c -n install ghc --force -u "https://storage.googleapis.com/graphql-engine-cdn.hasura.io/ghc-bindists/ghc-x86_64-deb10-linux-9.2.4-hasura-fix.tar.xz" 9.2.4
```
or for m1 mac:
```
ghcup -c -n install ghc --force -u "https://storage.googleapis.com/graphql-engine-cdn.hasura.io/ghc-bindists/ghc-arm64-apple-darwin-9.2.4-hasura-fix.tar.xz"
```
Samir is working on a nix build for nix folx
PR-URL: https://github.com/hasura/graphql-engine-mono/pull/6154
GitOrigin-RevId: c11a2598a7c75e2c315e36f3d6f0b488febfccd4