graphql-engine: commit 1f50b241
  • Merge branch 'main' into stable

    GitOrigin-RevId: 733058f9b7502d56c70070cab93ebfb85de9f37e
  • rikinsk committed with hasura-bot 2 months ago
    1f50b241
    1 parent a8630db2
  • ■ ■ ■ ■
    cli/README.md
    skipped 18 lines
    19 19  
    20 20   You can also install a specific version of the CLI by providing the `VERSION` variable:
    21 21   ```bash
    22  - curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.37.0 bash
     22 + curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.38.0 bash
    23 23   ```
    24 24   
    25 25  - Windows
    skipped 32 lines
  • ■ ■ ■ ■ ■ ■
    cli/get.sh
    skipped 43 lines
    44 44  # version=${VERSION:-`echo $(curl -s -f -H 'Content-Type: application/json' \
    45 45   # https://releases.hasura.io/graphql-engine?agent=cli-get.sh) | sed -n -e "s/^.*\"$release\":\"\([^\",}]*\)\".*$/\1/p"`}
    46 46   
    47  -version=${VERSION:-v2.37.0}
     47 +version=${VERSION:-v2.38.0}
    48 48   
    49 49  if [ ! $version ]; then
    50 50   log "${YELLOW}"
    skipped 11 lines
    62 62   
    63 63  log "${YELLOW}"
    64 64  log NOTE: Install a specific version of the CLI by using VERSION variable
    65  -log 'curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.37.0 bash'
     65 +log 'curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.38.0 bash'
    66 66  log "${NC}"
    67 67   
    68 68  # check for existing hasura installation
    skipped 92 lines
  • ■ ■ ■ ■ ■ ■
    docs/.gitignore
    skipped 30 lines
    31 31  .tool-versions
    32 32   
    33 33  spell_check_results.txt
     34 + 
     35 +.env*
  • ■ ■ ■ ■ ■ ■
    docs/docs/auth/authorization/permissions/row-level-permissions.mdx
    skipped 295 lines
    296 296  an array for your values. If your session variable value is already an array, you can click the `[X-Hasura-Allowed-Ids]`
    297 297  suggestion to remove the brackets and set your session variable in its place.
    298 298   
     299 +Here is an example of an array-based session variable:
     300 + 
     301 +```bash
     302 +X-Hasura-Allowed-Ids: {1,2,3}
     303 +```
     304 + 
     305 +And the related permission configuration:
     306 + 
     307 +```yaml
     308 +permission:
     309 +  filter:
     310 +    user_id:
     311 +      _in: X-Hasura-Allowed-Ids
     312 +```
     313 + 
    299 314  :::
    300 315   
    301 316  ## Permissions with relationships or nested objects {#relationships-in-permissions}
    skipped 330 lines
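    To try the permission above against a running instance, the session variables can be passed as plain request headers whenever the admin secret is supplied. A minimal sketch, assuming a local instance on `localhost:8080`, a `user` role, and a tracked `users` table:

    ```bash
    # Hypothetical test request: with the admin secret present, X-Hasura-* headers
    # are taken as session variables, so the array value drives the _in filter.
    curl -s http://localhost:8080/v1/graphql \
      -H 'X-Hasura-Admin-Secret: <admin-secret>' \
      -H 'X-Hasura-Role: user' \
      -H 'X-Hasura-Allowed-Ids: {1,2,3}' \
      -d '{"query":"query { users { id } }"}'
    ```

    Only rows whose `user_id` is 1, 2, or 3 should be returned.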
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/athena/getting-started/index.mdx
    skipped 33 lines
    34 34  2. [Docker](/databases/athena/getting-started/docker.mdx): Run Hasura with Docker and then connect your Amazon Athena
    35 35   service to Hasura.
    36 36   
     37 +:::info Using Kubernetes?
     38 + 
     39 +We have Helm charts available for deploying Hasura on Kubernetes. Check out
     40 +[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
     41 +[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
     42 + 
     43 +:::
     44 + 
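    As a minimal sketch of the Helm route mentioned in the info box above (the repository URL and chart name here are assumptions; confirm them against the linked `hasura/helm-charts` repository):

    ```bash
    # Assumed repo URL and chart name; verify against hasura/helm-charts.
    helm repo add hasura https://hasura.github.io/helm-charts
    helm repo update
    helm install my-hasura hasura/graphql-engine
    ```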
  • ■ ■ ■ ■ ■
    docs/docs/databases/bigquery/getting-started/index.mdx
    skipped 15 lines
    16 16   
    17 17  Here are two ways you can get started with Hasura:
    18 18   
    19  -1. [Hasura Cloud](/databases/bigquery/getting-started/cloud.mdx): Access and manage your BigQuery
    20  -database from Hasura Cloud.
     19 +1. [Hasura Cloud](/databases/bigquery/getting-started/cloud.mdx): Access and manage your BigQuery database from Hasura
     20 + Cloud.
    21 21  2. [Docker](/databases/bigquery/getting-started/docker.mdx): Run Hasura with Docker and then connect your BigQuery
    22  -database to Hasura.
     22 + database to Hasura.
     23 + 
     24 +:::info Using Kubernetes?
     25 + 
     26 +We have Helm charts available for deploying Hasura on Kubernetes. Check out
     27 +[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
     28 +[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
     29 + 
     30 +:::
    23 31   
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/clickhouse/getting-started/index.mdx
    skipped 18 lines
    19 19  2. [Docker](/databases/clickhouse/getting-started/docker.mdx): Run Hasura with Docker and then connect your ClickHouse
    20 20   service to Hasura.
    21 21   
     22 +:::info Using Kubernetes?
     23 + 
     24 +We have Helm charts available for deploying Hasura on Kubernetes. Check out
     25 +[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
     26 +[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
     27 + 
     28 +:::
     29 + 
  • ■ ■ ■ ■ ■
    docs/docs/databases/database-config/index.mdx
    skipped 71 lines
    72 72  exposed as part of the Hasura Metadata)_ as well as to allow configuring different databases in different
    73 73  environments _(like staging or production)_ easily.
    74 74   
    75  -A database can be connected to using the `HASURA_GRAPHQL_DATABASE_URL` environment variable as well in which case it
    76  -gets added automatically as a database named `default`.
    77  - 
    78 75  ### Allow connections from the Hasura Cloud IP {#cloud-projects-create-allow-nat-ip}
    79 76   
    80 77  When using Hasura Cloud, you may need to adjust your connection settings of your database provider to allow
    skipped 33 lines
    114 111  exposed as part of the Hasura Metadata)_ as well as to allow configuring different databases in different
    115 112  environments _(like staging or production)_ easily.
    116 113   
    117  -A database can be connected to using the `HASURA_GRAPHQL_DATABASE_URL` environment variable as well in which case it
    118  -gets added automatically as a database named default.
    119 114   
    120 115  </TabItem>
    121 116  </Tabs>
    skipped 5 lines
    127 122  <TabItem value="cli" label="CLI">
    128 123   
    129 124  In your `config v3` project, head to the `/metadata/databases/databases.yaml` file and add the database configuration as
    130  -below. If you're using the `HASURA_GRAPHQL_DATABASE_URL` environment variable then the database will get automatically
    131  -added and named default.
     125 +below.
    132 126   
    133 127  ```yaml
    134 128  - name: <db_name>
    skipped 63 lines
    198 192  When using Hasura Cloud, Metadata is stored for you in separate data storage to your connected database(s). When
    199 193  using Docker, if you want to
    200 194  [store the Hasura Metadata on a separate database](/deployment/graphql-engine-flags/reference.mdx#metadata-database-url),
    201  -you can use the `HASURA_GRAPHQL_METADATA_DATABASE_URL` env var to specify which database to use. By default, the
    202  -Hasura Metadata is stored on the same database as specified in the `HASURA_GRAPHQL_DATABASE_URL` environment variable.
     195 +you can use the `HASURA_GRAPHQL_METADATA_DATABASE_URL` env var to specify which database to use.
    203 196   
    204 197  ## Connect different Hasura instances to the same database
    205 198   
    skipped 17 lines
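    As a sketch of the Docker setup this page describes, the Metadata database and a data source can be wired up through separate environment variables; the hostnames and credentials below are placeholders:

    ```bash
    # Metadata lives in the database given by HASURA_GRAPHQL_METADATA_DATABASE_URL;
    # the data source is connected via a custom env var (any name not starting
    # with HASURA_), here PG_DATABASE_URL.
    docker run -p 8080:8080 \
      -e HASURA_GRAPHQL_METADATA_DATABASE_URL=postgres://user:pass@metadata-db:5432/hasura_metadata \
      -e PG_DATABASE_URL=postgres://user:pass@app-db:5432/app \
      hasura/graphql-engine:v2.38.0
    ```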
  • ■ ■ ■ ■
    docs/docs/databases/mariadb/cloud.mdx
    skipped 24 lines
    25 25  :::tip Supported versions:
    26 26   
    27 27  1. Hasura GraphQL Engine `v2.24.0` onwards
    28  -2. Hasura supports most databases with standard implementations of **MariaDB 10.5 and higher** including: Amazon RDS,
     28 +2. Hasura supports most databases with standard implementations of **MariaDB 10.6 and higher** including: Amazon RDS,
    29 29   Amazon Aurora, Digital Ocean and SkySQL.
    30 30   
    31 31  :::
    skipped 76 lines
  • ■ ■ ■ ■
    docs/docs/databases/mariadb/docker.mdx
    skipped 27 lines
    28 28  :::tip Supported versions:
    29 29   
    30 30  1. Hasura GraphQL Engine `v2.24.0` onwards
    31  -2. Hasura supports most databases with standard implementations of **MariaDB 10.5 and higher** including: Amazon RDS,
     31 +2. Hasura supports most databases with standard implementations of **MariaDB 10.6 and higher** including: Amazon RDS,
    32 32   Amazon Aurora, Digital Ocean and SkySQL.
    33 33   
    34 34  :::
    skipped 187 lines
  • ■ ■ ■ ■ ■
    docs/docs/databases/mariadb/index.mdx
    skipped 27 lines
    28 28  - In Hasura Cloud, check out our [Getting Started with MariaDB in Hasura Cloud](/databases/mariadb/cloud.mdx) guide
    29 29  - In a Docker environment, check out our [Getting Started with Docker](/databases/mariadb/docker.mdx) guide
    30 30   
     31 +:::info Using Kubernetes?
     32 + 
     33 +We have Helm charts available for deploying Hasura on Kubernetes. Check out
     34 +[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
     35 +[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
     36 + 
     37 +:::
     38 + 
    31 39  :::tip Supported versions:
    32 40   
    33 41  1. Hasura GraphQL Engine `v2.24.0` onwards
    34  -2. Hasura supports most databases with standard implementations of **MariaDB 10.5 and higher** including: Amazon RDS,
     42 +2. Hasura supports most databases with standard implementations of **MariaDB 10.6 and higher** including: Amazon RDS,
    35 43   Amazon Aurora, Digital Ocean and SkySQL.
    36 44   
    37 45  :::
    skipped 178 lines
    216 224   
    217 225  :::info Console support
    218 226   
    219  -We recommend using your preferred MariaDB client instead. The Hasura Console is designed to be a tool for managing
    220  -your GraphQL API, and not a full-fledged database management tool.
     227 +We recommend using your preferred MariaDB client instead. The Hasura Console is designed to be a tool for managing your
     228 +GraphQL API, and not a full-fledged database management tool.
    221 229   
    222 230  :::
    223 231   
    skipped 4 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/ms-sql-server/getting-started/index.mdx
    skipped 14 lines
    15 15   
    16 16  Here are 2 ways you can get started with Hasura:
    17 17   
    18  -1. [Hasura Cloud](/databases/ms-sql-server/getting-started/cloud.mdx) : You'll need to be able to access your SQL Server database from Hasura Cloud.
    19  -2. [Docker](/databases/ms-sql-server/getting-started/docker.mdx): Run Hasura with Docker and then connect your SQL Server database to Hasura.
     18 +1. [Hasura Cloud](/databases/ms-sql-server/getting-started/cloud.mdx): You'll need to be able to access your SQL Server
     19 + database from Hasura Cloud.
     20 +2. [Docker](/databases/ms-sql-server/getting-started/docker.mdx): Run Hasura with Docker and then connect your SQL
     21 + Server database to Hasura.
    20 22   
    21  -<!--
    22  -- [Hasura Cloud](/databases/ms-sql-server/getting-started/cloud.mdx)
    23  -- [Docker](/databases/ms-sql-server/getting-started/docker.mdx)
    24  --->
     23 +:::info Using Kubernetes?
     24 + 
     25 +We have Helm charts available for deploying Hasura on Kubernetes. Check out
     26 +[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
     27 +[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
     28 + 
     29 +:::
     30 + 
  • ■ ■ ■ ■ ■
    docs/docs/databases/mysql/index.mdx
    skipped 29 lines
    30 30  - In Hasura Cloud, check out our [Getting Started with MySQL in Hasura Cloud](/databases/mysql/cloud.mdx) guide
    31 31  - In a Docker environment, check out our [Getting Started with Docker](/databases/mysql/docker.mdx) guide
    32 32   
     33 +:::info Using Kubernetes?
     34 + 
     35 +We have Helm charts available for deploying Hasura on Kubernetes. Check out
     36 +[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
     37 +[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
     38 + 
     39 +:::
     40 + 
    33 41  :::tip Supported versions:
    34 42   
    35 43  1. Hasura GraphQL Engine `v2.24.0` onwards
    skipped 183 lines
    219 227   
    220 228  :::info Console support
    221 229   
    222  -We recommend using your preferred MySQL client instead. The Hasura Console is designed to be a tool for managing
    223  -your GraphQL API, and not a full-fledged database management tool.
     230 +We recommend using your preferred MySQL client instead. The Hasura Console is designed to be a tool for managing your
     231 +GraphQL API, and not a full-fledged database management tool.
    224 232   
    225 233  :::
    226 234   
    skipped 7 lines
  • ■ ■ ■ ■ ■
    docs/docs/databases/oracle/index.mdx
    skipped 28 lines
    29 29  - In Hasura Cloud, check out our [Getting Started with Oracle in Hasura Cloud](/databases/oracle/cloud.mdx) guide
    30 30  - In a Docker environment, check out our [Getting Started with Docker](/databases/oracle/docker.mdx) guide
    31 31   
     32 +:::info Using Kubernetes?
     33 + 
     34 +We have Helm charts available for deploying Hasura on Kubernetes. Check out
     35 +[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
     36 +[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
     37 + 
     38 +:::
     39 + 
    32 40  :::tip Supported versions
    33 41   
    34 42  1. Hasura GraphQL Engine `v2.24.0` onwards
    skipped 181 lines
    216 224   
    217 225  :::info Console support
    218 226   
    219  -We recommend using your preferred Oracle client instead. The Hasura Console is designed to be a tool for managing
    220  -your GraphQL API, and not a full-fledged database management tool.
     227 +We recommend using your preferred Oracle client instead. The Hasura Console is designed to be a tool for managing your
     228 +GraphQL API, and not a full-fledged database management tool.
    221 229   
    222 230  :::
     231 + 
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/quickstart.mdx
    skipped 84 lines
    85 85  <TabItem value="cli" label="CLI">
    86 86   
    87 87  In your `config v3` project, head to the `/metadata/databases/databases.yaml` file and add the database configuration as
    88  -below. If you're using the `HASURA_GRAPHQL_DATABASE_URL` environment variable then the database will get automatically
    89  -added and named default.
     88 +below.
    90 89   
    91 90  ```yaml
    92 91  - name: <db_name>
    skipped 161 lines
    254 253  When using Hasura Cloud, Metadata is stored for you in separate data storage to your connected database(s). When using
    255 254  Docker, if you want to
    256 255  [store the Hasura Metadata on a separate database](/deployment/graphql-engine-flags/reference.mdx#metadata-database-url),
    257  -you can use the `HASURA_GRAPHQL_METADATA_DATABASE_URL` env var to specify which database to use. By default, the Hasura
    258  -Metadata is stored on the same database as specified in the `HASURA_GRAPHQL_DATABASE_URL` environment variable.
     256 +you can use the `HASURA_GRAPHQL_METADATA_DATABASE_URL` env var to specify which database to use.
    259 257   
    260 258  ## Connect different Hasura instances to the same database
    261 259   
    skipped 16 lines
  • ■ ■ ■ ■ ■
    docs/docs/databases/redshift/getting-started/index.mdx
    1 1  ---
    2 2  slug: index
     3 +keywords:
    3 4   - hasura
    4 5   - docs
    5 6   - databases
    skipped 21 lines
    27 28   
    28 29  Here are 2 ways you can get started with Hasura:
    29 30   
    30  -1. [Hasura Cloud](/databases/redshift/getting-started/cloud.mdx) : You'll need to be able to access your Amazon Redshift
    31  - service from Hasura Cloud.
    32  -2. [Docker](/databases/redshift/getting-started/docker.mdx): Run Hasura with Docker and then connect your Amazon Redshift
    33  - service to Hasura.
      31 +1. [Hasura Cloud](/databases/redshift/getting-started/cloud.mdx): You'll need to be able to access your Amazon
     32 + Redshift service from Hasura Cloud.
     33 +2. [Docker](/databases/redshift/getting-started/docker.mdx): Run Hasura with Docker and then connect your Amazon
     34 + Redshift service to Hasura.
     35 + 
     36 +:::info Using Kubernetes?
     37 + 
     38 +We have Helm charts available for deploying Hasura on Kubernetes. Check out
     39 +[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
     40 +[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
     41 + 
     42 +:::
    34 43   
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/snowflake/getting-started/index.mdx
    skipped 18 lines
    19 19  2. [Docker](/databases/snowflake/getting-started/docker.mdx): Run Hasura with Docker and then connect your Snowflake
    20 20   service to Hasura.
    21 21   
     22 +:::info Using Kubernetes?
     23 + 
     24 +We have Helm charts available for deploying Hasura on Kubernetes. Check out
     25 +[more information here](/deployment/deployment-guides/kubernetes-helm.mdx) and see the
     26 +[`enterprise-stack` here](https://github.com/hasura/helm-charts/tree/main/charts/hasura-enterprise-stack).
     27 + 
     28 +:::
     29 + 
  • ■ ■ ■ ■
    docs/docs/databases/vector-databases/weaviate.mdx
    skipped 102 lines
    103 103  | Database Name | The name of your Weaviate database. |
    104 104  | `apiKey` | The API key for your Weaviate database. |
    105 105  | `host` | The URL of your Weaviate database. |
    106  -| `openAPIKey` | The OpenAI key for use with your Weaviate database. |
     106 +| `openAIKey` | The OpenAI key for use with your Weaviate database. |
    107 107  | `scheme` | The URL scheme for your Weaviate database (http/https). |
    108 108   
    109 109  :::info Where can I find these parameters?
    skipped 72 lines
  • ■ ■ ■ ■ ■
    docs/docs/deployment/deployment-guides/azure-container-instances-postgres.mdx
    skipped 149 lines
    150 150   --dns-name-label "<dns-name-label>" \
    151 151   --ports 80 \
    152 152   --environment-variables "HASURA_GRAPHQL_SERVER_PORT"="80" "HASURA_GRAPHQL_ENABLE_CONSOLE"="true" "HASURA_GRAPHQL_ADMIN_SECRET"="<admin-secret>"\
    153  - --secure-environment-variables "HASURA_GRAPHQL_DATABASE_URL"="<database-url>"
      153 + --secure-environment-variables "HASURA_GRAPHQL_METADATA_DATABASE_URL"="<database-url>" "PG_DATABASE_URL"="<database-url>"
    154 154  ```
    155 155   
    156 156  `<database-url>` should be replaced by the following format:
    skipped 2 lines
    159 159  postgres://hasura%40<server_name>:<server_admin_password>@<hostname>:5432/hasura
    160 160  ```
    161 161   
    162  -If you'd like to connect to an existing database, use that server's database url.
      162 +If you'd like to connect to an existing database, use that server's database URL. Hasura requires a Postgres database
      163 +to store its metadata. You can use the same database for both Hasura and the application data, or you can use a separate
      164 +database for Hasura's metadata.
    163 165   
    164 166  :::info Note
    165 167   
    skipped 30 lines
    196 198   "HASURA_GRAPHQL_ENABLE_CONSOLE"="true" \
    197 199   "HASURA_GRAPHQL_ADMIN_SECRET"="<admin-secret>" \
    198 200   "HASURA_GRAPHQL_JWT_SECRET"= \ "{\"type\": \"RS512\",\"key\": \"-----BEGIN CERTIFICATE-----\\nMIIDBzCCAe+gAwIBAgIJTpEEoUJ/bOElMA0GCSqGSIb3DQEBCwUAMCExHzAdBgNV\\nBAMTFnRyYWNrLWZyOC51cy5hdXRoMC5jb20wHhcNMjAwNzE3MDYxMjE4WhcNMzQw\\nMzI2MDYxMjE4WjAhMR8wHQYDVQQDExZ0cmFjay1mcjgudXMuYXV0aDAuY29tMIIB\\nIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuK9N9FWK1hEPtwQ8ltYjlcjF\\nX03jhGgUKkLCLxe8q4x84eGJPmeHpyK+iZZ8TWaPpyD3fk+s8BC3Dqa/Sd9QeOBh\\nZH/YnzoB3yKqF/FruFNAY+F3LUt2P2t72tcnuFg4Vr8N9u8f4ESz7OHazn+XJ7u+\\ncuqKulaxMI4mVT/fGinCiT4uGVr0VVaF8KeWsF/EJYeZTiWZyubMwJsaZ2uW2U52\\n+VDE0RE0kz0fzYiCCMfuNNPg5V94lY3ImcmSI1qSjUpJsodqACqk4srmnwMZhICO\\n14F/WUknqmIBgFdHacluC6pqgHdKLMuPnp37bf7ACnQ/L2Pw77ZwrKRymUrzlQID\\nAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSOG3E+4lHiI+l0i91u\\nxG2Rca2NATAOBgNVHQ8BAf8EBAMCAoQwDQYJKoZIhvcNAQELBQADggEBAKgmxr6c\\nYmSNJOTPtjMFFDZHHX/7iwr+vqzC3nalr6ku8E3Zs0/IpwAtzqXp0eVVdPCWUY3A\\nQCUTt63GrqshBHYAxTbT0rlXFkqL8UkJvdZQ3XoQuNsqcp22zlQWGHxsk3YP97rn\\nltPI56smyHqPj+SBqyN/Vs7Vga9G8fHCfltJOdeisbmVHaC9WquZ9S5eyT7JzPAC\\n5dI5ZUunm0cgKFVbLfPr7ykClTPy36WdHS1VWhiCyS+rKeN7KYUvoaQN2U3hXesL\\nr2M+8qaPOSQdcNmg1eMNgxZ9Dh7SXtLQB2DAOuHe/BesJj8eRyENJCSdZsUOgeZl\\nMinkSy2d927Vts8=\\n-----END CERTIFICATE-----\"}"
    199  - --secure-environment-variables "HASURA_GRAPHQL_DATABASE_URL"="<database-url>"
      201 + --secure-environment-variables "HASURA_GRAPHQL_METADATA_DATABASE_URL"="<database-url>" "PG_DATABASE_URL"="<database-url>"
    200 202  ```
     203 + 
      204 +Above, we're using the `--secure-environment-variables` flag to pass two environment variables that contain sensitive
      205 +information; the flag ensures that their values are encrypted at rest and in transit. Hasura uses the
      206 +`HASURA_GRAPHQL_METADATA_DATABASE_URL` variable to store its metadata and the `PG_DATABASE_URL` variable to connect to
      207 +the database. These can be the same database or different databases.
    201 208   
    202 209  :::info Note
    203 210   
    skipped 70 lines
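    Because the Azure username takes the form `hasura@<server_name>`, the `@` must be URL-encoded in the connection string, as the format above shows. A hypothetical filled-in value:

    ```bash
    # Assumed server "my-server" and password "p@ss"; "@" is encoded as %40.
    DATABASE_URL='postgres://hasura%40my-server:p%40ss@my-server.postgres.database.azure.com:5432/hasura'
    ```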
  • ■ ■ ■ ■ ■ ■
    docs/docs/deployment/deployment-guides/digital-ocean-one-click.mdx
    skipped 283 lines
    284 284  vim docker-compose.yaml
    285 285   
    286 286  ...
    287  -# change the url to use a different database
    288  -HASURA_GRAPHQL_DATABASE_URL: <database-url>
     287 +# change the url to use a different database for your metadata
      288 +HASURA_GRAPHQL_METADATA_DATABASE_URL: <database-url>
     289 +# and here for your data using the same or different database as above
     290 +PG_DATABASE_URL: <database-url>
    289 291  ...
    290 292   
    291 293  # type ESC followed by :wq to save and quit
    skipped 77 lines
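    After saving `docker-compose.yaml`, the container must be recreated for the new connection strings to take effect:

    ```bash
    # Recreate the Hasura container with the updated environment variables; use
    # "docker-compose up -d" if the droplet ships the older v1 CLI.
    docker compose up -d
    ```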
  • ■ ■ ■ ■
    docs/docs/deployment/deployment-guides/flightcontrol.mdx
    skipped 55 lines
    56 56  ### Step 3: Configure your database
    57 57   
    58 58  In the database section, set the `Env Variable Name for Connection String` in Database settings to be
    59  -`HASURA_GRAPHQL_DATABASE_URL` and choose a region:
      59 +`HASURA_GRAPHQL_METADATA_DATABASE_URL` and choose a region:
    60 60   
    61 61  <Thumbnail src="/img/deployment/flightcontrol-env-variable-name.png" alt="flightcontol enable console" />
    62 62   
    skipped 15 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/deployment/deployment-guides/google-cloud-run-cloud-sql.mdx
     1 +---
     2 +description: Step-by-step guide to deploy Hasura GraphQL Engine on Google Cloud Run with Cloud SQL for Postgres
     3 +title: 'Deploy Hasura GraphQL Engine on Google Cloud Run'
     4 +keywords:
     5 + - hasura
     6 + - google cloud run
     7 + - cloud sql
     8 + - deployment
     9 + - graphql
     10 +sidebar_position: 13
     11 +sidebar_label: Using Google Cloud Run & Cloud SQL
     12 +---
     13 + 
     14 +# Deploying Hasura GraphQL Engine on Cloud Run
     15 + 
     16 +To deploy Hasura GraphQL Engine on Google Cloud Run with a Cloud SQL (Postgres) instance and ensure secure communication
     17 +via private IP, follow this detailed guide.
     18 + 
     19 +:::info Prerequisites
     20 + 
     21 +This guide assumes you have a [Google Cloud](https://cloud.google.com/?hl=en) account and `gcloud` [installed](https://cloud.google.com/sdk/docs/install). Additionally, you should be working within a Google Cloud Project, whether it's one you've newly created or an existing project you have access to.
     22 +:::
     23 + 
     24 + 
     25 +## Step 1: Setup Your Environment
     26 + 
     27 +1. **Authenticate with Google Cloud:**
     28 + 
     29 +```bash
     30 +gcloud auth login
     31 +```
     32 + 
     33 +2. **Set your project ID:**
     34 + 
     35 +Replace `<PROJECT_ID>` with your actual Google Cloud project ID.
     36 + 
     37 +```bash
     38 +gcloud config set project <PROJECT_ID>
     39 +```
     40 + 
     41 +## Step 2: Enable Required Google Cloud Services
     42 + 
     43 +Enable Cloud Run, Cloud SQL, Cloud SQL Admin, Secret Manager, and the Service Networking APIs:
     44 + 
     45 + 
     46 +```bash
     47 +gcloud services enable run.googleapis.com sqladmin.googleapis.com servicenetworking.googleapis.com secretmanager.googleapis.com
     48 +```
     49 + 
     50 +:::caution Requires IAM permissions
     51 + 
     52 +To execute the above command, your Google Cloud account needs to have the Service Usage Admin role (roles/serviceusage.serviceUsageAdmin) or an equivalent custom role with permissions to enable services. This role allows you to view, enable, and disable services in your GCP project.
     53 + 
     54 +If you encounter permissions errors, contact your GCP administrator to ensure your account has the appropriate roles assigned, or to request the services be enabled on the project you are working with.
     55 + 
     56 +:::
     57 + 
     58 +## Step 3: Create a Cloud SQL (Postgres) Instance
     59 + 
     60 +1. **Create the database instance:**
     61 + 
     62 +```bash
     63 +gcloud sql instances create hasura-postgres --database-version=POSTGRES_15 --cpu=2 --memory=7680MiB --region=us-central1
     64 +```
     65 + 
     66 +2. **Set the password** for the default postgres user:
     67 + 
     68 +Replace `<PASSWORD>` with your desired password.
     69 + 
     70 +```bash
     71 +gcloud sql users set-password postgres --instance=hasura-postgres --password=<PASSWORD>
     72 +```
     73 + 
     74 +3. **Create a database**
     75 + 
     76 +Replace `<DATABASE_NAME>` with your database name:
     77 + 
     78 +```bash
     79 +gcloud sql databases create <DATABASE_NAME> --instance=hasura-postgres
     80 +```
     81 + 
     82 +:::info Don't have a `default` network?
     83 + 
      84 +The `default` network is normally created inside a Google Cloud Platform project; however, in some cases the `default` network might have been deleted, or the project may have been set up with a specific network configuration without a default network.
     85 + 
     86 +To see the networks you have available you can run:
     87 + 
     88 +```bash
     89 +gcloud compute networks list
     90 +```
     91 + 
     92 +If you find you do not have an appropriate network for your deployment, you can create a new VPC network by running the following command to create a network named `default`:
     93 + 
     94 +```bash
     95 +gcloud compute networks create default --subnet-mode=auto
     96 +```
     97 + 
     98 +:::
     99 + 
     100 + 
     101 +## Step 4: Configure Service Networking for Private Connectivity
     102 + 
     103 +1. **Allocate an IP range** for Google services in your VPC:
     104 + 
     105 +```bash
     106 +gcloud compute addresses create google-managed-services-default \
     107 + --global \
     108 + --purpose=VPC_PEERING \
     109 + --prefix-length=24 \
     110 + --network=default
     111 +```
     112 + 
     113 +2. **Connect your VPC to the Service Networking API:**
     114 + 
     115 +Replace `<PROJECT_ID>` with your actual Google Cloud project ID.
     116 + 
     117 +```bash
     118 +gcloud services vpc-peerings connect \
     119 + --service=servicenetworking.googleapis.com \
     120 + --ranges=google-managed-services-default \
     121 + --network=default \
     122 + --project=<PROJECT_ID>
     123 +```
     124 + 
     125 +3. **Enable a private IP** for your CloudSQL instance:
     126 + 
     127 +```bash
     128 +gcloud sql instances patch hasura-postgres --network=default
     129 +```
     130 + 
     131 +## Step 5: Create your connection string
     132 + 
     133 +1. **Find your Cloud SQL instance's connection name:**
     134 + 
     135 +```bash
     136 +gcloud sql instances describe hasura-postgres
     137 +```
     138 + 
     139 +:::info Note
     140 + 
     141 +Take note of the `connectionName` field in the output of the above `describe` command. You will use the `connectionName` to deploy the GraphQL Engine to Cloud Run.
     142 + 
     143 +:::
     144 + 
     145 +2. **Construct your connection string**
     146 + 
     147 +You can create the connection string by filling in the following template string. Replace `<CONNECTION_NAME>`, `<PASSWORD>`, and `<DATABASE_NAME>` with your actual connectionName, database password, and
     148 +database name.
     149 + 
     150 +```
     151 +postgres://postgres:<PASSWORD>@/<DATABASE_NAME>?host=/cloudsql/<CONNECTION_NAME>
     152 +```
     153 + 
     154 +## Step 6: Store your connection string in the Secret Manager
     155 + 
     156 +While you can put the connection string directly into the environment variables, it is recommended that you store it and any secrets or credentials inside of [Google's Secret Manager](https://cloud.google.com/security/products/secret-manager) for maximum security. This prevents secrets from being visible to administrators and from being accessible in other parts of the control/operations plane.
     157 + 
     158 +1. **Store the constructed connection string as a secret** replacing `<CONNECTION_STRING>` with your actual connection string.
     159 + 
     160 +```bash
     161 +echo -n "<CONNECTION_STRING>" | gcloud secrets create hasura-db-connection-string --data-file=-
     162 +```
     163 + 
     164 +:::info Not using the `default` service account?
     165 + 
     166 +The following steps assume that you are running the `gcloud deploy` command via the default service account used by compute engine. If you are not using the default service account, you will need to grant the service account you are using the `roles/secretmanager.secretAccessor` role.
     167 + 
     168 +:::
     169 + 
     170 + 
     171 +2. **To get the `<PROJECT_NUMBER>` associated with the default service account:**
     172 + 
     173 +```bash
     174 +echo "$(gcloud projects describe $(gcloud config get-value project) --format='value(projectNumber)')"
     175 +```
     176 + 
      177 +3. **Run the following command to grant the default service account access to the secrets**, replacing `<PROJECT_NUMBER>` with your project number from the previous command:
     178 + 
     179 + ```bash
     180 + gcloud projects add-iam-policy-binding <PROJECT_NUMBER> \
      181 + --member='serviceAccount:<PROJECT_NUMBER>-compute@developer.gserviceaccount.com' \
     182 + --role='roles/secretmanager.secretAccessor'
     183 + ```
     184 + 
      185 +## Step 7: Deploy Hasura to Cloud Run
     186 + 
      187 +1. **Run the following command** and replace `<CONNECTION_NAME>` with your actual `connectionName`.
     188 + 
     189 +For additional information on configuring the Hasura GraphQL engine, please see the [Server configuration reference](https://hasura.io/docs/latest/deployment/graphql-engine-flags/reference/).
     190 + 
     191 +```bash
     192 +gcloud run deploy hasura-graphql-engine \
     193 + --image=hasura/graphql-engine:latest \
     194 + --add-cloudsql-instances=<CONNECTION_NAME> \
     195 + --update-env-vars='HASURA_GRAPHQL_ENABLE_CONSOLE=true' \
     196 + --update-secrets=HASURA_GRAPHQL_DATABASE_URL=hasura-db-connection-string:latest \
     197 + --region=us-central1 \
     198 + --cpu=1 \
     199 + --min-instances=1 \
     200 + --memory=2048Mi \
     201 + --port=8080 \
     202 + --allow-unauthenticated
     203 +```
     204 + 
     205 + 
     206 +## Step 8: Adding a VPC-Connector (Optional)
     207 + 
     208 +To further enhance the connectivity and security of your Hasura GraphQL Engine deployment on Google Cloud Run,
     209 +especially when connecting to other services within your Virtual Private Cloud (VPC), you might consider adding a
     210 +Serverless VPC Access connector. This optional step is particularly useful when your architecture requires direct access
     211 +from your serverless Cloud Run service to resources within your VPC, such as VMs, other databases, or private services
     212 +that are not exposed to the public internet. For more information, please see [Google's official documentation for Serverless VPC Access](https://cloud.google.com/vpc/docs/serverless-vpc-access).
     213 + 
     214 +1. **Enable the Serverless VPC Access API**
     215 + 
     216 +First ensure that the Serverless VPC Access API is enabled:
     217 + 
     218 +```bash
     219 +gcloud services enable vpcaccess.googleapis.com
     220 +```
     221 + 
     222 +2. **Create a Serverless VPC Access Connector**
     223 + 
     224 +Choose an IP range that does not overlap with existing ranges in your VPC. This range will be used by the connector to
     225 +route traffic from your serverless application to your VPC. **It's important to ensure that the IP range does not overlap with other subnets to avoid routing conflicts.**
     226 + 
     227 +```bash
     228 +gcloud compute networks vpc-access connectors create hasura-connector \
     229 + --region=us-central1 \
     230 + --network=default \
     231 + --range=10.8.0.0/28
     232 +```
     233 + 
     234 +3. **Update the Cloud Run Deployment to use the VPC Connector**
     235 + 
     236 +When deploying or updating your Hasura GraphQL Engine service, specify the VPC connector with the `--vpc-connector`
     237 +flag:
     238 + 
     239 +```bash
     240 +gcloud run deploy hasura-graphql-engine \
     241 + --image=hasura/graphql-engine:latest \
     242 + --add-cloudsql-instances=<CONNECTION_NAME> \
     243 + --update-env-vars='HASURA_GRAPHQL_ENABLE_CONSOLE=true' \
     244 + --update-secrets=HASURA_GRAPHQL_DATABASE_URL=hasura-db-connection-string:latest \
     245 + --vpc-connector=hasura-connector \
     246 + --region=us-central1 \
     247 + --cpu=1 \
     248 + --min-instances=1 \
     249 + --memory=2048Mi \
     250 + --port=8080 \
     251 + --allow-unauthenticated
     252 +```
     253 + 
     254 +### When and Why to Use a VPC Connector
     255 + 
     256 +* **Enhanced Security:** Utilize a VPC Connector when you need to ensure that traffic between your Cloud Run service and
     257 + internal Google Cloud resources does not traverse the public internet, enhancing security.
      258 +* **Access to Internal Resources:** Use it when your serverless application needs access to resources within your
      259 + VPC, such as internal APIs, databases, or services that are not
      260 + publicly accessible.
     261 +* **Compliance Requirements:** If your application is subject to compliance requirements that mandate data and network
     262 + traffic must remain within a private network, a VPC connector facilitates this by providing private access to your
     263 + cloud resources.
     264 +* **Network Peering:** It's beneficial when accessing services in a peered VPC, allowing your Cloud Run services to
     265 + communicate with resources across VPC networks.
     266 + 
     267 +Adding a VPC Connector to your Cloud Run deployment ensures that your Hasura GraphQL Engine can securely and privately
     268 +access the necessary Google Cloud resources within your VPC, providing a robust and secure environment for your
     269 +applications.
     270 + 
     271 +## Tearing Down
     272 + 
     273 +To avoid incurring charges, delete the resources once you're done:
     274 + 
     275 +```bash
     276 +gcloud sql instances delete hasura-postgres
     277 +gcloud run services delete hasura-graphql-engine
     278 +gcloud compute addresses delete google-managed-services-default --global
     279 +gcloud secrets delete hasura-db-connection-string
     280 +```
     281 + 
     282 +If you performed the optional Step 8, you should also delete the VPC-connector resource:
     283 + 
     284 +```bash
     285 +gcloud compute networks vpc-access connectors delete hasura-connector --region=us-central1
     286 +```
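    Putting Steps 5 and 6 of the guide above together, a hypothetical end-to-end run with placeholder project, database, and password values:

    ```bash
    # Hypothetical values; substitute your own connectionName, database name,
    # and password from Steps 3 and 5.
    CONNECTION_NAME="my-project:us-central1:hasura-postgres"
    CONNECTION_STRING="postgres://postgres:s3cret@/hasura?host=/cloudsql/${CONNECTION_NAME}"
    echo -n "$CONNECTION_STRING" | gcloud secrets create hasura-db-connection-string --data-file=-
    ```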
  • ■ ■ ■ ■ ■
    docs/docs/deployment/deployment-guides/index.mdx
    skipped 42 lines
    43 43  - [Deploy using Nhost One-click Deploy with Managed PostgreSQL, Storage, and Auth](/deployment/deployment-guides/nhost-one-click.mdx)
    44 44  - [Deploy using Koyeb Serverless Platform](/deployment/deployment-guides/koyeb.mdx)
    45 45  - [Deploy using Flightcontrol on AWS Fargate](/deployment/deployment-guides/flightcontrol.mdx)
     46 +- [Deploy using Google Cloud Run with Cloud SQL](/deployment/deployment-guides/google-cloud-run-cloud-sql.mdx)
    46 47   
  • ■ ■ ■ ■ ■ ■
    docs/docs/deployment/deployment-guides/koyeb.mdx
    skipped 31 lines
    32 32   
    33 33  [![Deploy to Koyeb](https://www.koyeb.com/static/images/deploy/button.svg)](https://app.koyeb.com/deploy?name=hasura-demo&type=docker&image=hasura/graphql-engine&env[HASURA_GRAPHQL_DATABASE_URL]=CHANGE_ME&env[HASURA_GRAPHQL_ENABLE_CONSOLE]=true&env[HASURA_GRAPHQL_ADMIN_SECRET]=CHANGE_ME&ports=8080;http;/)
    34 34   
    35  -On the configuration screen, set the `HASURA_GRAPHQL_DATABASE_URL` environment variable to the connection string for your database and the `HASURA_GRAPHQL_ADMIN_SECRET` environment variable to a secret value to access the Hasura Console.
      35 +On the configuration screen, set the `HASURA_GRAPHQL_METADATA_DATABASE_URL` (depicted as `HASURA_GRAPHQL_ENGINE_DATABASE_URL` in this screenshot) environment variable to the connection string for your database and the `HASURA_GRAPHQL_ADMIN_SECRET` environment variable to a secret value to access the Hasura Console.
    36 36   
    37 37  Click the **Deploy** button when you are finished. When the deployment completes, you can [access the Hasura Console](#access-the-hasura-console).
    38 38   
    skipped 13 lines
    52 52   
    53 53  4. In the **Environment variables** section, configure the environment variables required to properly run the Hasura GraphQL Engine:
    54 54   
    55  - - `HASURA_GRAPHQL_DATABASE_URL`: The environment variable containing the PostgreSQL URL, i.e. `postgres://<user>:<password>@<host>:<port>/<database>`. Since this value contains sensitive information, select the "Secret" type. Secrets are encrypted at rest and are ideal for storing sensitive data like API keys, OAuth tokens, etc. Choose "Create secret" in the "Value" drop-down menu and enter the secret value in the "Create secret" form.
      55 + - `HASURA_GRAPHQL_METADATA_DATABASE_URL`: Hasura requires a PostgreSQL database to store its metadata. This can be the same database as `PG_DATABASE_URL` or a different one. We strongly recommend using a secret to store this value.
     56 + - `PG_DATABASE_URL`: The environment variable containing the PostgreSQL URL, i.e. `postgres://<user>:<password>@<host>:<port>/<database>`. Since this value contains sensitive information, select the "Secret" type. Secrets are encrypted at rest and are ideal for storing sensitive data like API keys, OAuth tokens, etc. Choose "Create secret" in the "Value" drop-down menu and enter the secret value in the "Create secret" form.
    56 57   - `HASURA_GRAPHQL_ENABLE_CONSOLE`: Set to `true`. This will expose and allow you to access the Hasura Console.
    57  - - `HASURA_GRAPHQL_ADMIN_SECRET`: The secret to access the Hasura Console. As with the `HASURA_GRAPHQL_DATABASE_URL`, we strongly recommend using a secret to store this value.
     58 + - `HASURA_GRAPHQL_ADMIN_SECRET`: The secret to access the Hasura Console. As with the other environment variables, we strongly recommend using a secret to store this value.
    58 59   
    59 60  5. In the **Exposing your service** section, change the `Port` from `80` to `8080` to match the port that the `hasura/graphql-engine` Docker image app listens on. Koyeb uses this setting to perform application health checks and to properly route incoming HTTP requests. If you want the Hasura GraphQL Engine to be available on a specific path, you can change the default one (`/`) to the path of your choice.
    60 61   
    skipped 21 lines
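    For illustration, the two connection-string variables described in the Koyeb steps above might look like this (hypothetical values; in Koyeb they are entered as Secrets rather than exported in a shell):

    ```bash
    HASURA_GRAPHQL_METADATA_DATABASE_URL='postgres://hasura:s3cret@db.example.com:5432/hasura_metadata'
    PG_DATABASE_URL='postgres://hasura:s3cret@db.example.com:5432/app_data'
    ```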
  • ■ ■ ■ ■ ■ ■
    docs/docs/deployment/deployment-guides/kubernetes.mdx
    skipped 36 lines
    37 37   
    38 38  ```yaml {2}
    39 39  env:
    40  - - name: HASURA_GRAPHQL_DATABASE_URL
      40 + - name: HASURA_GRAPHQL_METADATA_DATABASE_URL
    41 41   value: postgres://<username>:<password>@hostname:<port>/<dbname>
    42 42  ```
    43 43   
    44  -Examples of `HASURA_GRAPHQL_DATABASE_URL`:
      44 +Examples of `HASURA_GRAPHQL_METADATA_DATABASE_URL`:
    45 45   
    46 46  - `postgres://admin:password@localhost:5432/my-db`
    47 47  - `postgres://admin:@localhost:5432/my-db` _(if there is no password)_
    skipped 1 lines
    49 49  :::info Note
    50 50   
    51 51  - If your **password contains special characters** (e.g. #, %, $, @, etc.), you need to URL encode them in the
    52  - `HASURA_GRAPHQL_DATABASE_URL` env var (e.g. %40 for @).
      52 + `HASURA_GRAPHQL_METADATA_DATABASE_URL` env var (e.g. %40 for @).
    53 53   
    54 54   You can check the [logs](#kubernetes-logs) to see if the database credentials are proper and if Hasura is able to
    55 55   connect to the database.
    skipped 48 lines
    104 104   command: ["graphql-engine"]
    105 105   args: ["serve", "--enable-console"]
    106 106   env:
    107  - - name: HASURA_GRAPHQL_DATABASE_URL
      107 + - name: HASURA_GRAPHQL_METADATA_DATABASE_URL
    108 108   value: postgres://<username>:<password>@hostname:<port>/<dbname>
    109 109   - name: HASURA_GRAPHQL_ADMIN_SECRET
    110 110   value: mysecretkey
    skipped 96 lines
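    A quick sketch of the URL-encoding step from the note above, assuming a password `p@ss#word` and using Python's standard library as the encoder:

    ```bash
    python3 -c 'import urllib.parse; print(urllib.parse.quote("p@ss#word", safe=""))'
    # prints p%40ss%23word, giving postgres://admin:p%40ss%23word@hostname:5432/my-db
    ```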
  • ■ ■ ■ ■
    docs/docs/deployment/downgrading.mdx
    skipped 48 lines
    49 49  to run:
    50 50   
    51 51  ```bash
    52  -docker run -e HASURA_GRAPHQL_DATABASE_URL=$DATABASE_URL hasura/graphql-engine:<VERSION> graphql-engine downgrade --to-<NEW-VERSION>
      52 +docker run -e HASURA_GRAPHQL_METADATA_DATABASE_URL=$DATABASE_URL hasura/graphql-engine:<VERSION> graphql-engine downgrade --to-<NEW-VERSION>
    53 53  ```
    54 54   
    55 55  You need to use a newer version of `graphql-engine` to downgrade to an
    skipped 41 lines
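    A hypothetical invocation, assuming the target is one of the downgrade flags supported by the image (run `graphql-engine downgrade --help` to list them):

    ```bash
    docker run -e HASURA_GRAPHQL_METADATA_DATABASE_URL=$DATABASE_URL \
      hasura/graphql-engine:v2.38.0 graphql-engine downgrade --to-v2.0.0
    ```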
  • ■ ■ ■ ■ ■
    docs/docs/deployment/graphql-engine-flags/config-examples.mdx
    skipped 258 lines
    259 259  ```bash
    260 260  # env var
    261 261  HASURA_GRAPHQL_METADATA_DATABASE_URL=postgres://<user>:<password>@<host>:<port>/<metadata-db-name>
    262  -HASURA_GRAPHQL_DATABASE_URL=postgres://<user>:<password>@<host>:<port>/<db-name>
     262 +PG_DATABASE_URL=postgres://<user>:<password>@<host>:<port>/<db-name>
    263 263   
    264 264  # flag
    265 265  --metadata-database-url=postgres://<user>:<password>@<host>:<port>/<metadata-db-name>
    skipped 3 lines
    269 269  In this case, Hasura GraphQL Engine will use the
    270 270  `HASURA_GRAPHQL_METADATA_DATABASE_URL` to store the `metadata catalogue`
    271 271  and starts the server with the database provided in the
    272  -`HASURA_GRAPHQL_DATABASE_URL`.
     272 +`PG_DATABASE_URL`.
    273 273   
    274 274  **2. Only** `metadata database` **is provided to the server**
    275 275   
    skipped 10 lines
    286 286  and starts the server without tracking/managing any database. _i.e_ a
    287 287  Hasura GraphQL server will be started with no database. The user could
    288 288  then manually track/manage databases at a later time.
    289  - 
    290  -**3. Only** `primary database` **is provided to the server**
    291  - 
    292  -```bash
    293  -# env var
    294  -HASURA_GRAPHQL_DATABASE_URL=postgres://<user>:<password>@<host>:<port>/<db-name>
    295  - 
    296  -# flag
    297  ---database-url=postgres://<user>:<password>@<host>:<port>/<db-name>
    298  -```
    299  - 
    300  -In this case, Hasura GraphQL Engine server will start with the database
    301  -provided in the `HASURA_GRAPHQL_DATABASE_URL` and will also use the
    302  -_same database_ to store the `metadata catalogue`.
    303  - 
    304  -**4. Neither** `primary database` **nor** `metadata database` **is
    305  -provided to the server**
    306  - 
    307  -Hasura GraphQL Engine will fail to startup and will throw an error
    308  - 
    309  -```bash
    310  -Fatal Error: Either of --metadata-database-url or --database-url option expected
    311  -```
    312  - 
  • ■ ■ ■ ■ ■ ■
    docs/docs/deployment/graphql-engine-flags/reference.mdx
    skipped 43 lines
    44 44   
    45 45  :::info Note
    46 46   
    47  -This config option is supported to maintain backwards compatibility with `v1.x` Hasura instances. In versions `v2.0` and
    48  -above, databases can be connected using any custom environment variables of your choice.
     47 +This config option is supported to maintain backwards compatibility with `v1.x` Hasura instances. **In versions `v2.0`
     48 +and above, databases can be connected using any custom environment variables of your choice. Our `docker-compose.yaml`
     49 +files in the install manifests reference `PG_DATABASE_URL` as the environment variable to use for connecting to a
     50 +database, but this can be any plaintext value which does not start with `HASURA_`.**
    49 51   
    50 52  :::
    51 53   
    52 54  ### Metadata Database URL
    53 55   
    54  -This Postgres database URL is used to store Hasura's Metadata. By default, the database configured using
    55  -`HASURA_GRAPHQL_DATABASE_URL` / `--database_url` will be used to store the Metadata. This can also be a URI of the form
     56 +This Postgres database URL is used to store Hasura's Metadata. This can also be a URI of the form
    56 57  `dynamic-from-file:///path/to/file`, where the referenced file contains a postgres connection string, which will be read
    57 58  dynamically every time a new connection is established. This allows the server to be used in an environment where
    58 59  secrets are rotated frequently.
    skipped 9 lines
    68 69   
    69 70  :::info Note
    70 71   
    71  -Either one of the Metadata Database URL or the Database URL needs to be provided for Hasura to start.
      72 +The Metadata Database URL needs to be set for Hasura to start.
    72 73   
    73 74  :::
    74 75   
    skipped 311 lines
    386 387  | **Default** | `false` |
    387 388  | **Supported in** | CE, Enterprise Edition, Cloud |
    388 389   
    389  -### Header Size Limit
     390 + 
     391 +### Enable Automated Persisted Queries
     392 + 
     393 +Enables the [Automated Persisted Queries](https://www.apollographql.com/docs/apollo-server/performance/apq/) feature.
     394 + 
     395 +| | |
     396 +| ------------------- | ------------------------------------------------ |
     397 +| **Flag** | `--enable-persisted-queries` |
     398 +| **Env var** | `HASURA_GRAPHQL_ENABLE_PERSISTED_QUERIES` |
     399 +| **Accepted values** | Boolean |
     400 +| **Default** | `false` |
     401 +| **Supported in** | Enterprise Edition |
    390 402   
    391  -Sets the maximum cumulative length of all headers in bytes.
     403 +### Set Automated Persisted Queries TTL
    392 404   
    393  -| | |
    394  -| ------------------- | ---------------------------------------- |
    395  -| **Flag** | `--max-total-header-length` |
    396  -| **Env var** | `HASURA_GRAPHQL_MAX_TOTAL_HEADER_LENGTH` |
    397  -| **Accepted values** | Integer |
    398  -| **Default** | `1024*1024` (1MB) |
    399  -| **Supported in** | CE, Enterprise Edition |
     405 +Sets the query TTL in the cache store for Automated Persisted Queries.
     406 + 
     407 +| | |
     408 +| ------------------- | ------------------------------------------------ |
     409 +| **Flag** | `--persisted-queries-ttl` |
     410 +| **Env var** | `HASURA_GRAPHQL_PERSISTED_QUERIES_TTL` |
     411 +| **Accepted values** | Integer |
     412 +| **Default** | `5` (seconds) |
     413 +| **Supported in** | Enterprise Edition |
     414 + 
    400 415   
    401 416  ### Enable Error Log Level for Trigger Errors
    402 417   
    skipped 7 lines
    410 425  | **Default** | `false` |
    411 426  | **Supported in** | CE, Enterprise Edition |
    412 427   
     428 + 
    413 429  ### Enable Console
    414 430   
    415 431  Enable the Hasura Console (served by the server on `/` and `/console`).
    skipped 6 lines
    422 438  | **Options** | `true` or `false` |
    423 439  | **Default** | **CE**, **Enterprise Edition**: `false` <br />**Cloud**: Console is always enabled |
    424 440  | **Supported in** | CE, Enterprise Edition |
     441 + 
     442 +### Header Size Limit
     443 + 
     444 +Sets the maximum cumulative length of all headers in bytes.
     445 + 
     446 +| | |
     447 +| ------------------- | ---------------------------------------- |
     448 +| **Flag** | `--max-total-header-length` |
     449 +| **Env var** | `HASURA_GRAPHQL_MAX_TOTAL_HEADER_LENGTH` |
     450 +| **Accepted values** | Integer |
     451 +| **Default** | `1024*1024` (1MB) |
     452 +| **Supported in** | CE, Enterprise Edition |
     453 + 
    425 454   
    426 455  ### Enable High-cardinality Labels for Metrics
    427 456   
    skipped 100 lines
    528 557  | **Env var** | `HASURA_GRAPHQL_ENABLED_LOG_TYPES` |
    529 558  | **Accepted values** | String (Comma-separated) |
    530 559  | **Options** | `startup`, `http-log`, `webhook-log`, `websocket-log`, `query-log`, `execution-log`, `livequery-poller-log`, `action-handler-log`, `data-connector-log`, `jwk-refresh-log`, `validate-input-log` |
    531  -| **Default** | `startup, http-log, webhook-log, websocket-log`, `jwk-refresh` |
     560 +| **Default** | `startup, http-log, webhook-log, websocket-log`, `jwk-refresh-log` |
    532 561  | **Supported in** | CE, Enterprise Edition |
    533 562   
    534 563  ### Events HTTP Pool Size
    skipped 735 lines
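    As a sketch of how the two Automated Persisted Queries settings above combine, a hypothetical Enterprise Edition startup with APQ enabled and a 120-second TTL:

    ```bash
    docker run -p 8080:8080 \
      -e HASURA_GRAPHQL_ENABLE_PERSISTED_QUERIES=true \
      -e HASURA_GRAPHQL_PERSISTED_QUERIES_TTL=120 \
      -e HASURA_GRAPHQL_METADATA_DATABASE_URL=$DATABASE_URL \
      hasura/graphql-engine:v2.38.0
    ```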
  • ■ ■ ■ ■
    docs/docs/enterprise/getting-started/quickstart-google-cloud-run.mdx
    skipped 130 lines
    131 131   --env-vars-file=env.yaml \
    132 132   --vpc-connector=<vpc-connector-name> \
    133 133   --allow-unauthenticated \
    134  - --max-instances=1 \
     134 + --min-instances=1 \
    135 135   --cpu=1 \
    136 136   --memory=2048Mi \
    137 137   --port=8080
    skipped 7 lines
  • ■ ■ ■ ■ ■
    docs/docs/enterprise/sso/adfs.mdx
    skipped 305 lines
    306 306   environment:
    307 307   HASURA_GRAPHQL_EE_LICENSE_KEY: <YOUR_EE_LICENSE_KEY>
    308 308   HASURA_GRAPHQL_ADMIN_SECRET: <YOUR_ADMIN_SECRET>
    309  - HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
      309 + HASURA_GRAPHQL_METADATA_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
     310 + PG_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
    310 311   HASURA_GRAPHQL_ENABLE_CONSOLE: 'true'
    311 312   HASURA_GRAPHQL_DEV_MODE: 'true'
    312 313   HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup,http-log,webhook-log,websocket-log,query-log
    skipped 30 lines
  • ■ ■ ■ ■ ■
    docs/docs/enterprise/sso/auth0.mdx
    skipped 426 lines
    427 427   environment:
    428 428   HASURA_GRAPHQL_EE_LICENSE_KEY: <YOUR_EE_LICENSE_KEY>
    429 429   HASURA_GRAPHQL_ADMIN_SECRET: <YOUR_ADMIN_SECRET>
    430  - HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
      430 + HASURA_GRAPHQL_METADATA_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
     431 + PG_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
    431 432   HASURA_GRAPHQL_ENABLE_CONSOLE: 'true'
    432 433   HASURA_GRAPHQL_DEV_MODE: 'true'
    433 434   HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup,http-log,webhook-log,websocket-log,query-log
    skipped 27 lines
  • ■ ■ ■ ■ ■
    docs/docs/enterprise/sso/google-workspace.mdx
    skipped 280 lines
    281 281   environment:
    282 282   HASURA_GRAPHQL_EE_LICENSE_KEY: <YOUR_EE_LICENSE_KEY>
    283 283   HASURA_GRAPHQL_ADMIN_SECRET: <YOUR_ADMIN_SECRET>
    284  - HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
      284 + HASURA_GRAPHQL_METADATA_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
     285 + PG_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
    285 286   HASURA_GRAPHQL_ENABLE_CONSOLE: 'true'
    286 287   HASURA_GRAPHQL_DEV_MODE: 'true'
    287 288   HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup,http-log,webhook-log,websocket-log,query-log
    skipped 30 lines
  • ■ ■ ■ ■ ■
    docs/docs/enterprise/sso/ldap.mdx
    skipped 402 lines
    403 403   environment:
    404 404   HASURA_GRAPHQL_EE_LICENSE_KEY: <YOUR_EE_LICENSE_KEY>
    405 405   HASURA_GRAPHQL_ADMIN_SECRET: <YOUR_ADMIN_SECRET>
    406  - HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
      406 + HASURA_GRAPHQL_METADATA_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
     407 + PG_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres?sslmode=disable
    407 408   HASURA_GRAPHQL_ENABLE_CONSOLE: 'true'
    408 409   HASURA_GRAPHQL_DEV_MODE: 'true'
    409 410   HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup,http-log,webhook-log,websocket-log,query-log
    skipped 29 lines
  • ■ ■ ■ ■
    docs/docs/event-triggers/observability-and-performance.mdx
    skipped 49 lines
    50 50   
    51 51  ## Observability
    52 52   
    53  -<ProductBadge self />
     53 +<ProductBadge self ee />
    54 54   
    55 55  Hasura EE exposes a set of [Prometheus metrics](/observability/enterprise-edition/prometheus/metrics.mdx/#hasura-event-triggers-metrics)
    56 56  that can be used to monitor the Event Trigger system and help diagnose performance issues.
    skipped 118 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/hasura-cli/install-hasura-cli.mdx
    skipped 45 lines
    46 46  You can also install a specific version of the CLI by providing the `VERSION` variable:
    47 47   
    48 48  ```bash
    49  -curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.37.0 bash
     49 +curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.38.0 bash
    50 50  ```
    51 51   
    52 52  </TabItem>
    skipped 18 lines
    71 71  You can also install a specific version of the CLI by providing the `VERSION` variable:
    72 72   
    73 73  ```bash
    74  -curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.37.0 bash
     74 +curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.38.0 bash
    75 75  ```
    76 76   
    77 77  </TabItem>
    skipped 46 lines
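    Whichever install path is used, the pinned version can be confirmed afterwards:

    ```bash
    # Should report v2.38.0 when the VERSION variable above was honored.
    hasura version
    ```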
  • ■ ■ ■ ■ ■
    docs/docs/hasura-cli/quickstart.mdx
    skipped 116 lines
    117 117  ```
    118 118   
    119 119  We'll enter the name `default` for the ` Database Display Name` field. This name is used to identify the data source in
    120  -Hasura's Metadata and is not your database's name. Should you choose to use the `HASURA_GRAPHQL_DATABASE_URL`
    121  -environment variable instead, `default` is the default name assigned to your data source by Hasura.
     120 +Hasura's Metadata and is not your database's name.
    122 121   
    123 122  Next, we'll choose `Environment Variable` from the `Connect Database Via` options; enter `PG_DATABASE_URL` as the name:
    124 123   
    skipped 157 lines
  • ■ ■ ■ ■
    docs/docs/migrations-metadata-seeds/legacy-configs/config-v2/advanced/auto-apply-migrations.mdx
    skipped 57 lines
    58 58  docker run -p 8080:8080 \
    59 59   -v /home/me/my-project/migrations:/hasura-migrations \
    60 60   -v /home/me/my-project/metadata:/hasura-metadata \
    61  - -e HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:@postgres:5432/postgres \
      61 + -e HASURA_GRAPHQL_METADATA_DATABASE_URL=postgres://postgres:@postgres:5432/postgres \
    62 62   hasura/graphql-engine:v1.2.0.cli-migrations-v2
    63 63  ```
    64 64   
    skipped 8 lines
  • ■ ■ ■ ■ ■
    docs/docs/migrations-metadata-seeds/legacy-configs/upgrade-v3.mdx
    skipped 240 lines
    241 241  Your project directory and `config.yaml` should be updated to v3.
    242 242   
    243 243  The update script will ask for the name of the database the current
    244  -Migrations and seeds correspond to. If you are starting Hasura with a
    245  -`HASURA_GRAPHQL_DATABASE_URL` then the name of the database should be
    246  -`default`.
     244 +Migrations and seeds correspond to.
    247 245   
    248 246  ## Continue using config v2
    249 247   
    skipped 27 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/observability/cloud/newrelic.mdx
    skipped 57 lines
    58 58  | Custom Attributes | Custom Attributes associated with your logs and metrics. A default source tag `hasura-cloud-metrics` is added to all exported logs and metrics. Attributes `project_id` and `project_name` are added to all exported metrics. |
    59 59  | Service Name | The name of the application or service generating the log events. |
    60 60   
     61 +:::info API Key type
     62 + 
     63 +Your API key must be of type `License` in order to export logs and metrics to New Relic.
     64 + 
     65 +:::
     66 + 
    61 67  <Thumbnail src="/img/observability/configure-newrelic.png" alt="Configure New Relic Integration" />
    62 68   
    63 69  After adding appropriate values, click `Save`.
    skipped 70 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/observability/enterprise-edition/prometheus/metrics.mdx
    skipped 43 lines
    44 44   
    45 45  Number of GraphQL requests received, representing the GraphQL query/mutation traffic on the server.
    46 46   
    47  -| | |
    48  -| ------ | -------------------------------------------------------------- |
    49  -| Name | `hasura_graphql_requests_total` |
    50  -| Type | Counter |
    51  -| Labels | `operation_type`: query \| mutation \| subscription \| unknown |
     47 +| | |
     48 +| ------ | -------------------------------------------------------------------------------------------------------------------------------------------------- |
     49 +| Name | `hasura_graphql_requests_total` |
     50 +| Type | Counter |
     51 +| Labels | `operation_type`: query \| mutation \| subscription \| unknown, `response_status`: success \| failed, `operation_name`, `parameterized_query_hash` |
    52 52   
    53 53  The `unknown` operation type will be returned for queries that fail authorization, parsing, or certain validations. The
    54 54  `response_status` label will be `success` for successful requests and `failed` for failed requests.
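As a rough illustration of what a scrape returns (the port, endpoint, and label values below are assumptions, not
guaranteed output):

```bash
# Scrape the metrics endpoint and inspect the GraphQL request counters and their labels
curl -s http://localhost:8080/v1/metrics | grep hasura_graphql_requests_total
# e.g. hasura_graphql_requests_total{operation_type="query",operation_name="getArticles",response_status="success",parameterized_query_hash="..."} 42
```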
    skipped 468 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/observability/opentelemetry.mdx
    skipped 37 lines
    38 38  be exported directly from your Hasura instances to your observability tool that supports OpenTelemetry traces. This can
    39 39  be configured in the `Settings` section of the Hasura Console.
    40 40   
     41 +## Available Metrics
     42 + 
     43 +The available OpenTelemetry metrics are the same as those available via
     44 +[Prometheus](/observability/enterprise-edition/prometheus/metrics.mdx).
     45 + 
    41 46  ## Configure the OpenTelemetry receiver
    42 47   
    43 48  :::info Supported from
    skipped 12 lines
    56 61   
    57 62  :::info Traces on Hasura Cloud
    58 63   
    59  -Hasura Cloud implements sampling on traces. That means only one in every `n` traces will be sampled and exported
    60  -(`n` will be automatically configured based on various parameters during runtime. This can't be manually adjusted).
     64 +Hasura Cloud implements sampling on traces. That means only one in every `n` traces will be sampled and exported (`n`
     65 +will be automatically configured based on various parameters during runtime. This can't be manually adjusted).
    61 66   
    62 67  :::
    63 68   
    skipped 240 lines
    304 309   
    305 310  Trace and Span ID are included in the root of the log body. GraphQL Engine follows
    306 311  [OpenTelemetry's data model](https://opentelemetry.io/docs/specs/otel/logs/data-model/#log-and-event-record-definition)
    307  -so that OpenTelemetry-compliant services can automatically correlate logs with Traces. However, some services need
    308  -extra configurations.
     312 +so that OpenTelemetry-compliant services can automatically correlate logs with Traces. However, some services need extra
      313 +configuration.
    309 314   
    310 315  ### Jaeger
    311 316   
    skipped 22 lines
    334 339   filterByTraceID: false
    335 340   filterBySpanID: false
    336 341   customQuery: true
    337  - query: "{exporter=\"OTLP\"} | json | traceid=`$${__span.traceId}`"
     342 + query: '{exporter="OTLP"} | json | traceid=`$${__span.traceId}`'
    338 343   traceQuery:
    339 344   timeShiftEnabled: true
    340 345   spanStartTimeShift: '1h'
    skipped 10 lines
    351 356   
    352 357  ### Datadog
    353 358   
    354  -If Datadog can't correlate between traces and logs, you should verify the Trace ID attributes mapping.
    355  -Read more at [the troubleshooting section](https://docs.datadoghq.com/tracing/troubleshooting/correlated-logs-not-showing-up-in-the-trace-id-panel/?tab=jsonlogs#trace-id-option) on Datadog.
      359 +If Datadog can't correlate traces and logs, you should verify the Trace ID attribute mapping. Read more at
     360 +[the troubleshooting section](https://docs.datadoghq.com/tracing/troubleshooting/correlated-logs-not-showing-up-in-the-trace-id-panel/?tab=jsonlogs#trace-id-option)
     361 +on Datadog.
    356 362   
    357 363  <Thumbnail
    358 364   src="/img/enterprise/open-telemetry-datadog-trace-log.png"
    skipped 3 lines
    362 368   
    363 369  ### Honeycomb
    364 370   
    365  -Traces and logs can't correlate together if they are exported to different datasets.
    366  -Note that Honeycomb will use the `service.name` attribute as the dataset where logs are exported.
    367  -Therefore the `x-honeycomb-dataset` header must be matched with that attribute.
      371 +Traces and logs can't be correlated if they are exported to different datasets. Note that Honeycomb uses the
      372 +`service.name` attribute as the dataset to which logs are exported. Therefore, the `x-honeycomb-dataset` header must
      373 +match that attribute.
    368 374   
    369 375  <Thumbnail
    370 376   src="/img/enterprise/open-telemetry-honeycomb-trace-log.png"
    skipped 4 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/queries/bigquery/variables-aliases-fragments-directives.mdx
    skipped 24 lines
    25 25  **Example:** Fetch an author by their `author_id`:
    26 26   
    27 27  <GraphiQLIDE
    28  - query={`query getArticles($author_id: Int!) {
    29  - bigquery_articles(
    30  - where: { author_id: { _eq: $author_id } }
     28 + query={`query getArticles($author_id: Int!, $title: String!) {
     29 + articles(
      30 + where: { author_id: { _eq: $author_id }, title: { _like: $title } }
    31 31   ) {
    32 32   id
    33 33   title
    34 34   }
    35 35  }`}
    36  -response={`{
     36 + response={`{
    37 37   "data": {
    38  - "bigquery_articles": [
     38 + "articles": [
    39 39   {
    40  - "id": "15",
     40 + "id": 15,
    41 41   "title": "How to climb Mount Everest"
    42 42   },
    43 43   {
    44  - "id": "6",
     44 + "id": 6,
    45 45   "title": "How to be successful on broadway"
    46 46   }
    47 47   ]
    48 48   }
    49 49  }`}
    50  -variables={`{
    51  - "author_id": 1
     50 + variables={`{
     51 + "author_id": 1,
     52 + "title": "%How to%"
    52 53  }`}
    53 54  />
    54 55   
    skipped 262 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/queries/ms-sql-server/variables-aliases-fragments-directives.mdx
    skipped 23 lines
    24 24  **Example:** Fetch an author by their `author_id`:
    25 25   
    26 26  <GraphiQLIDE
    27  - query={`query getArticles($author_id: Int!) {
     27 + query={`query getArticles($author_id: Int!, $title: String!) {
    28 28   articles(
    29  - where: { author_id: { _eq: $author_id } }
      29 + where: { author_id: { _eq: $author_id }, title: { _like: $title } }
    30 30   ) {
    31 31   id
    32 32   title
    33 33   }
    34  -}`}
    35  - variables={`{
    36  - "author_id": 1
    37 34  }`}
    38 35   response={`{
    39 36   "data": {
    skipped 8 lines
    48 45   }
    49 46   ]
    50 47   }
     48 +}`}
     49 + variables={`{
     50 + "author_id": 1,
     51 + "title": "%How to%"
    51 52  }`}
    52 53  />
    53 54   
    skipped 262 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/queries/postgres/variables-aliases-fragments-directives.mdx
    skipped 24 lines
    25 25  **Example:** Fetch an author by their `author_id`:
    26 26   
    27 27  <GraphiQLIDE
    28  - query={`query getArticles($author_id: Int!) {
     28 + query={`query getArticles($author_id: Int!, $title: String!) {
    29 29   articles(
    30  - where: { author_id: { _eq: $author_id } }
     30 + where: { author_id: { _eq: $author_id }, title: { _ilike: $title } }
    31 31   ) {
    32 32   id
    33 33   title
    skipped 14 lines
    48 48   }
    49 49  }`}
    50 50  variables={`{
    51  - "author_id": 1
     51 + "author_id": 1,
     52 + "title": "%How to%"
    52 53  }`}
    53 54  />
    54 55   
    skipped 262 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/resources/upgrade-hasura-v2.mdx
    skipped 172 lines
    173 173   with Hasura v2 instances. Hasura v2 will assume the `v2` Metadata and Migrations belong to a database connected with
    174 174   the name `default`.
    175 175   
    176  -- A new optional env var `HASURA_GRAPHQL_METADATA_DATABASE_URL` is now introduced. When set, this Postgres database is
    177  - used to store the Hasura Metadata. If not set, the database set using `HASURA_GRAPHQL_DATABASE_URL` is used to store
    178  - the Hasura Metadata.
    179  - 
    180  - Either one of `HASURA_GRAPHQL_METADATA_DATABASE_URL` or `HASURA_GRAPHQL_DATABASE_URL` needs to be set with a Postgres
    181  - database to start a Hasura v2 instance as Hasura always needs a Postgres database to store its metadata.
    182  - 
    183  -- The database set using the `HASURA_GRAPHQL_DATABASE_URL` env var is connected automatically with the name `default` in
    184  - Hasura v2 while updating an existing instance or while starting a fresh instance.
    185  - 
    186  - Setting this env var post initial setup/update will have no effect as the Hasura Metadata for data sources would
    187  - already have been initialized and the env var will be treated as any other custom env var.
    188  - 
    189  - It is now not mandatory to set this env var if a dedicated `HASURA_GRAPHQL_METADATA_DATABASE_URL` is set.
      176 +- A new env var `HASURA_GRAPHQL_METADATA_DATABASE_URL` has been introduced and is mandatory, as Hasura always needs a
      177 + Postgres database to store its Metadata (see the sketch below).
    190 178   
    191 179  - Custom env vars can now be used to connect databases dynamically at runtime.
    192 180   
    193 181  - With support for multiple databases, older database specific env vars have been deprecated.
    194 182   [See details](#hasura-v2-env-changes)
    195 183   
     184 +:::info Existing Metadata
     185 + 
      186 +When upgrading an existing instance, `HASURA_GRAPHQL_METADATA_DATABASE_URL` must be the connection string of the database where your Metadata previously existed.
     187 + 
     188 +:::
     189 + 
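As a minimal sketch, starting a fresh Hasura v2 instance with a dedicated Metadata database (the image tag and
connection string are illustrative):

```bash
# Hasura v2 always needs a Postgres database for its Metadata
docker run -p 8080:8080 \
  -e HASURA_GRAPHQL_METADATA_DATABASE_URL=postgres://postgres:postgrespassword@db:5432/postgres \
  hasura/graphql-engine:v2.38.0
```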
    196 190  ## Moving from Hasura v1 to Hasura v2 {#moving-from-hasura-v1-to-v2}
    197 191   
    198 192  ### Hasura v1 and Hasura v2 compatibility {#hasura-v1-v2-compatibility}
    skipped 7 lines
     206 200  After adding a database named `default`, the Hasura v2 instance should behave equivalently to the Hasura v1 instance and
    207 201  all previous workflows will continue working as they were.
    208 202   
    209  -Refer to [connecting databases](/databases/overview.mdx) to add a database to Hasura v2.
     203 +Refer to [connecting databases](/databases/quickstart.mdx) to add a database to Hasura v2.
    210 204   
    211 205  ### Migrate Hasura v1 instance to Hasura v2
    212 206   
    213 207  Hasura v2 is backwards compatible with Hasura v1. Hence simply updating the Hasura docker image version number and
    214  -restarting your Hasura instance should work seamlessly. The database connected using the `HASURA_GRAPHQL_DATABASE_URL`
    215  -env var will be added as a database with the name `default` automatically and all existing Metadata and Migrations will
    216  -be assumed to belong to it.
     208 +restarting your Hasura instance should work seamlessly.
    217 209   
    218 210  :::info Note
    219 211   
    skipped 62 lines
    282 274  the Hasura Metadata catalogue changes:
    283 275   
    284 276  ```bash
    285  -docker run -e HASURA_GRAPHQL_DATABASE_URL=$POSTGRES_URL hasura/graphql-engine:v2.0.0 graphql-engine downgrade --to-v1.3.3
      277 +docker run -e HASURA_GRAPHQL_METADATA_DATABASE_URL=$POSTGRES_URL hasura/graphql-engine:v2.0.0 graphql-engine downgrade --to-v1.3.3
    286 278  ```
    287 279   
    288 280  :::info Note
    skipped 5 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/restified/quickstart.mdx
    skipped 21 lines
     22 22  To see an alternative method of creating a REST endpoint from a query in the GraphiQL IDE, check out the
    23 23  [Create RESTified endpoints](/restified/create.mdx#create-from-graphiql) page.
    24 24   
    25  -:::info Data source availability
    26  - 
    27  -Available for **Postgres, MS SQL Server, Citus, AlloyDB and CockroachDB** databases.
    28  - 
    29  -:::
    30  - 
    31 25  <SampleAppBlock dependent />
    32 26   
     33 27  ### Step 1: Navigate to the products table
    34 28   
    35 29  Navigate to `Data > default > public > products` and click the "Create REST Endpoints" button.
    36 30   
    37  - 
    38  -<Thumbnail
    39  - src="/img/restified/restified-create-from-table-btn.png"
    40  - alt="Create RESTified Endpoint"
    41  -/>
     31 +<Thumbnail src="/img/restified/restified-create-from-table-btn.png" alt="Create RESTified Endpoint" />
    42 32   
    43 33  ### Step 2: Choose operations
    44 34   
    45  -After clicking on the "Create REST endpoints" button, you will see a modal list of all REST operations (`READ`, `READ
    46  - ALL`, `CREATE`, `UPDATE`, `DELETE`) available on the table. Select `READ` and `CREATE` for this demo. Click the
     35 +After clicking on the "Create REST endpoints" button, you will see a modal list of all REST operations (`READ`,
     36 +`READ ALL`, `CREATE`, `UPDATE`, `DELETE`) available on the table. Select `READ` and `CREATE` for this demo. Click the
    47 37  "Create" button.
    48 38   
    49  -<Thumbnail
    50  - src="/img/restified/restified-modal-from-table.png"
    51  - alt="Create RESTified Endpoint"
    52  - width="400px"
    53  -/>
     39 +<Thumbnail src="/img/restified/restified-modal-from-table.png" alt="Create RESTified Endpoint" width="400px" />
    54 40   
    55 41  ### Step 3: View all REST endpoints
    56 42   
    57 43  You will be able to see the newly created REST endpoints listed in the `API > REST` tab.
    58 44   
    59  -<Thumbnail
    60  - src="/img/restified/restified-tracked-table-view.png"
    61  - alt="Create RESTified Endpoint"
    62  - width="1000px"
    63  -/>
     45 +<Thumbnail src="/img/restified/restified-tracked-table-view.png" alt="Create RESTified Endpoint" width="1000px" />
    64 46   
    65 47  ### Step 4: Test the REST endpoint
    66 48   
    67  -Click on the `products_by_pk` title to get to the details page for that RESTified endpoint. In the "Request
    68  -Variables" section for `id` enter the value `7992fdfa-65b5-11ed-8612-6a8b11ef7372`, the UUID for one of the products
    69  -already in the `products` table of the docs sample app. Click "Run Request".
     49 +Click on the `products_by_pk` title to get to the details page for that RESTified endpoint. In the "Request Variables"
     50 +section for `id` enter the value `7992fdfa-65b5-11ed-8612-6a8b11ef7372`, the UUID for one of the products already in the
     51 +`products` table of the docs sample app. Click "Run Request".
    70 52   
    71  -<Thumbnail
    72  - src="/img/restified/restified-test.png"
    73  - alt="Create RESTified Endpoint"
    74  - width="1000px"
    75  -/>
     53 +<Thumbnail src="/img/restified/restified-test.png" alt="Create RESTified Endpoint" width="1000px" />
    76 54   
    77 55  You will see the result returned next to the query.
    78 56   
    79  -You can test the other `insert_products_one` endpoint that we created in the same way by providing a new product
    80  -object as the request variable.
     57 +You can test the other `insert_products_one` endpoint that we created in the same way by providing a new product object
     58 +as the request variable.
    81 59   
    82 60  You can also use your favourite REST client to test the endpoint. For example, using `curl`:
    83 61   
    skipped 8 lines
    92 70  What just happened? Well, you just created two REST endpoints for reading a single product and inserting a product,
    93 71  super fast, and without writing a single line of code 🎉
    94 72   
    95  -This saves you significant time and effort, as you easily enable REST endpoints on your tables or [convert any query
    96  -or mutation into a REST endpoint](/restified/create.mdx) with just a few clicks.
      73 +This saves you significant time and effort, as you can easily enable REST endpoints on your tables or
     74 +[convert any query or mutation into a REST endpoint](/restified/create.mdx) with just a few clicks.
    97 75   
    98 76  By using RESTified endpoints, you can take advantage of the benefits of both REST and GraphQL, making your Hasura
    99 77  project even more versatile and powerful. For more details, check out the
    skipped 2 lines
  • ■ ■ ■ ■ ■
    docs/docs/schema/postgres/custom-functions.mdx
    skipped 7 lines
    8 8   - postgres
    9 9   - schema
    10 10   - sql functions
    11  - - stored procedures
    12 11  ---
    13 12   
    14 13  import GraphiQLIDE from '@site/src/components/GraphiQLIDE';
    skipped 6 lines
    21 20  ## What are Custom functions?
    22 21   
    23 22  Postgres [user-defined SQL functions](https://www.postgresql.org/docs/current/sql-createfunction.html) can be used to
    24  -either encapsulate some custom business logic or extend the built-in SQL functions and operators. SQL functions are also
    25  -referred to as **stored procedures**.
     23 +either encapsulate some custom business logic or extend the built-in SQL functions and operators.
    26 24   
    27 25  Hasura GraphQL Engine lets you expose certain types of user-defined functions as top level fields in the GraphQL API to
    28 26  allow querying them with either `queries` or `subscriptions`, or for `VOLATILE` functions as `mutations`. These are
    skipped 555 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/schema/snowflake/native-queries.mdx
    skipped 307 lines
    308 308  :::info Permissions and Logical Models
    309 309   
    310 310  Note that this Logical Model has no attached permissions and therefore will only be available to the admin role. See the
    311  -[Logical Model documentation](/schema/ms-sql-server/logical-models.mdx) for information on attaching permissions.
     311 +[Logical Model documentation](/schema/snowflake/logical-models.mdx) for information on attaching permissions.
    312 312   
    313 313  :::
    314 314   
    skipped 182 lines
    497 497   
    498 498  When making a query, the arguments are specified using the `args` parameter of the query root field.
    499 499   
     500 +##### Example: `LIKE` operator
     501 + 
      502 +A commonly used operator is `LIKE`. When used in a `WHERE` condition, it is usually written with the syntax
      503 +`WHERE Title LIKE '%word%'`.
      504 + 
      505 +To use it with Native Query arguments, you instead need the syntax `LIKE ('%' || {{searchTitle}} || '%')`,
      506 +where `searchTitle` is the Native Query parameter.
     507 + 
    500 508  ## Query functionality
    501 509   
    502 510  Just like tables, Native Queries generate GraphQL types with the ability to further break down the data. You can find
    skipped 12 lines
    515 523  ## Permissions
    516 524   
    517 525  Native queries will inherit the permissions of the Logical Model that they return. See the
    518  -[documentation on Logical Models](/schema/ms-sql-server/logical-models.mdx) for an explanation of how to add
    519  -permissions.
     526 +[documentation on Logical Models](/schema/snowflake/logical-models.mdx) for an explanation of how to add permissions.
    520 527   
    521 528  ## Relationships
    522 529   
    skipped 7 lines
    530 537  Currently relationships are only supported between Native Queries residing in the same source.
    531 538   
    532 539  As an example, consider the following Native Queries which implement the data model of articles and authors given in the
    533  -section on [Logical Model references](/schema/ms-sql-server/logical-models.mdx#referencing-other-logical-models):
     540 +section on [Logical Model references](/schema/snowflake/logical-models.mdx#referencing-other-logical-models):
    534 541   
    535 542  <Tabs groupId="user-preference" className="api-tabs">
    536 543  <TabItem value="api" label="API">
    skipped 157 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/security/dynamic-secrets.mdx
    skipped 92 lines
    93 93  Dynamic secrets can be used in template variables for data connectors. See
    94 94  [Template variables](/databases/database-config/data-connector-config.mdx/#template) for reference.
    95 95   
     96 +## Forcing secret refresh
     97 + 
      98 +If the environment variable `HASURA_SECRETS_BLOCKING_FORCE_REFRESH_URL=<url>`
      99 +is set, then on each connection failure the server will POST the following payload to the specified URL:
     100 + 
     101 +```
     102 +{"filename": <path>}
     103 +```
     104 + 
     105 +It is expected that the responding server will return only after refreshing the
     106 +secret at the given filepath. [hasura-secret-refresh](https://github.com/hasura/hasura-secret-refresh)
     107 +follows this spec.
     108 + 
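When building or testing a refresh server, you can simulate the engine's request by hand. A minimal sketch (the URL and
file path are illustrative):

```bash
# Simulate the POST graphql-engine makes to the refresh URL on a connection failure
curl -X POST http://localhost:5353/refresh \
  -H 'Content-Type: application/json' \
  -d '{"filename": "/secrets/db-password"}'
```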
  • ■ ■ ■ ■ ■
    docs/docusaurus.config.js
    skipped 18 lines
    19 19   projectName: 'graphql-engine',
    20 20   staticDirectories: ['static', 'public'],
    21 21   customFields: {
    22  - docsBotEndpointURL:
    23  - process.env.NODE_ENV === 'development'
    24  - ? 'ws://localhost:8000/hasura-docs-ai'
    25  - : 'wss://website-api.hasura.io/chat-bot/hasura-docs-ai',
     22 + docsBotEndpointURL: (() => {
     23 + console.log('process.env.release_mode docs-bot', process.env.release_mode);
     24 + switch (process.env.release_mode) {
     25 + case 'development':
     26 + return 'ws://localhost:8000/hasura-docs-ai';
     27 + case 'production':
     28 + return 'wss://website-api.hasura.io/chat-bot/hasura-docs-ai';
     29 + case 'staging':
     30 + return 'wss://website-api.stage.hasura.io/chat-bot/hasura-docs-ai';
     31 + default:
     32 + return 'ws://localhost:8000/hasura-docs-ai'; // default to development if no match
     33 + }
     34 + })(),
    26 35   hasuraVersion: 2,
    27 36   DEV_TOKEN: process.env.DEV_TOKEN,
    28 37   },
    skipped 237 lines
  • ■ ■ ■ ■ ■ ■
    docs/src/components/AiChatBot/AiChatBot.tsx
    skipped 3 lines
    4 4  import useDocusaurusContext from '@docusaurus/useDocusaurusContext';
    5 5  import { CloseIcon, RespondingIconGray, SparklesIcon } from '@site/src/components/AiChatBot/icons';
    6 6  import { useLocalStorage } from 'usehooks-ts'
    7  -import profilePic from '@site/static/img/hasura-ai-profile-pic.png';
    8  - 
     7 +import profilePic from '@site/static/img/docs-bot-profile-pic.webp';
     8 +import { v4 as uuidv4 } from 'uuid';
    9 9   
    10 10  interface Message {
    11 11   userMessage: string;
    skipped 14 lines
    26 26  const initialMessages: Message[] = [
    27 27   {
    28 28   userMessage: '',
    29  - botResponse: "Hi! I'm HasuraAI, the docs chatbot.",
     29 + botResponse: "Hi! I'm DocsBot, the Hasura docs AI chatbot.",
    30 30   },
    31 31   {
    32 32   userMessage: '',
    skipped 17 lines
    50 50   const [isResponding, setIsResponding] = useState<boolean>(false)
    51 51   // Manage the text input
    52 52   const [input, setInput] = useState<string>('');
     53 + // Manage the message thread ID
      54 + const [messageThreadId, setMessageThreadId] = useLocalStorage<string>(`hasuraV${customFields.hasuraVersion}ThreadId`, uuidv4())
    53 55   // Manage the historical messages
    54 56   const [messages, setMessages] = useLocalStorage<Message[]>(`hasuraV${customFields.hasuraVersion}BotMessages`, initialMessages);
    55 57   // Manage the current message
    skipped 129 lines
    185 187   }
    186 188   
    187 189   if (ws) {
    188  - const toSend = JSON.stringify({ previousMessages: messages, currentUserInput: input });
     190 + const toSend = JSON.stringify({ previousMessages: messages, currentUserInput: input, messageThreadId });
    189 191   setCurrentMessage({ userMessage: input, botResponse: '' });
    190 192   setInput('');
    191 193   ws.send(toSend);
    skipped 1 lines
    193 195   }
    194 196   
    195 197   };
     198 + 
     199 + const baseUrl = useDocusaurusContext().siteConfig.baseUrl;
    196 200   
    197 201   return (
    198 202   <div className="chat-popup">
    skipped 10 lines
    209 213   <div className="chat-window">
    210 214   <div className="info-bar">
    211 215   <div className={"bot-name-pic-container"}>
    212  - <div className="bot-name">HasuraAI</div>
     216 + <div className="bot-name">DocsBot</div>
    213 217   <img src={profilePic} height={30} width={30} className="bot-pic"/>
    214 218   </div>
    215 219   <button className="clear-button" onClick={() => {
    216 220   setMessages(initialMessages)
    217 221   setCurrentMessage({ userMessage: '', botResponse: '' });
     222 + setMessageThreadId(uuidv4());
    218 223   }}>Clear</button>
    219 224   </div>
    220 225   <div className="messages-container" onScroll={handleScroll} ref={scrollDiv}>
    skipped 58 lines
  • ■ ■ ■ ■
    docs/src/components/BannerDismissable/DDNBanner.tsx
    skipped 8 lines
    9 9   return (
    10 10   <div className="banner">
    11 11   <div>
    12  - Hasura DDN is the future of data delivery.&nbsp;<a href="https://hasura.io/docs/3.0/index">Click here for the Hasura DDN docs</a>.
     12 + Hasura DDN is the future of data delivery.&nbsp;<a href="https://hasura.io/docs/3.0/index">Click here for the Hasura DDN docs</a>.
    13 13   </div>
    14 14   <button className="close-btn" onClick={() => setIsVisible(false)}>
    15 15   <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24">
    skipped 6 lines
  • ■ ■ ■ ■ ■ ■
    docs/src/components/HasuraConBanner/index.tsx
    skipped 6 lines
    7 7  import styles from './styles.module.scss';
    8 8   
    9 9  const HasuraConBanner = props => {
    10  - const isSnowFlakeSection = props.location.pathname.startsWith(`/docs/latest/databases/snowflake`);
     10 + // const isSnowFlakeSection = props.location.pathname.startsWith(`/docs/latest/databases/snowflake`);
    11 11   
    12  - const isObservabilitySection = props.location.pathname.startsWith(`/docs/latest/observability`);
     12 + // const isObservabilitySection = props.location.pathname.startsWith(`/docs/latest/observability`);
    13 13   
    14  - const isSecuritySection = props.location.pathname.startsWith(`/docs/latest/security`);
     14 + // const isSecuritySection = props.location.pathname.startsWith(`/docs/latest/security`);
    15 15   
    16  - const isMySQLSection = props.location.pathname.startsWith(`/docs/latest/databases/mysql`);
     16 + // const isMySQLSection = props.location.pathname.startsWith(`/docs/latest/databases/mysql`);
    17 17   
    18  - const isOracleSection = props.location.pathname.startsWith(`/docs/latest/databases/oracle`);
     18 + // const isOracleSection = props.location.pathname.startsWith(`/docs/latest/databases/oracle`);
    19 19   
    20  - const isMariaDBSection = props.location.pathname.startsWith(`/docs/latest/databases/mariadb`);
     20 + // const isMariaDBSection = props.location.pathname.startsWith(`/docs/latest/databases/mariadb`);
    21 21   
    22 22   // Banner for - New product launch webinar */
    23  - if (isMySQLSection || isOracleSection || isMariaDBSection) {
    24  - return (
    25  - <div className={styles['product-launch-webinar-bg']}>
    26  - <a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/product-launch/">
    27  - <div className={styles['hasura-con-brand']}>
    28  - <img
    29  - className={styles['brand-light']}
    30  - src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1683628053/main-web/Group_11457_vceb9f.png"
    31  - alt="hasura-webinar"
    32  - />
    33  - </div>
    34  - <div className={styles['content-div']}>
    35  - <h3>Ship faster with low-code APIs on MySQL, MariaDB, and Oracle</h3>
    36  - <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
    37  - View Recording
    38  - <ArrowRight />
    39  - </div>
    40  - </div>
    41  - </a>
    42  - </div>
    43  - );
    44  - }
     23 + // if (isMySQLSection || isOracleSection || isMariaDBSection) {
     24 + // return (
     25 + // <div className={styles['product-launch-webinar-bg']}>
     26 + // <a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/product-launch/">
     27 + // <div className={styles['hasura-con-brand']}>
     28 + // <img
     29 + // className={styles['brand-light']}
     30 + // src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1683628053/main-web/Group_11457_vceb9f.png"
     31 + // alt="hasura-webinar"
     32 + // />
     33 + // </div>
     34 + // <div className={styles['content-div']}>
     35 + // <h3>Ship faster with low-code APIs on MySQL, MariaDB, and Oracle</h3>
     36 + // <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
     37 + // View Recording
     38 + // <ArrowRight />
     39 + // </div>
     40 + // </div>
     41 + // </a>
     42 + // </div>
     43 + // );
     44 + // }
    45 45   
    46  - if (isSnowFlakeSection) {
    47  - return (
    48  - <div className={styles['snowflake-bg']}>
    49  - <a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/snowflake-and-postgresql/">
    50  - <div className={styles['hasura-con-brand']}>
    51  - <img
    52  - className={styles['brand-light']}
    53  - src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677756408/main-web/Group_11455_1_ziz1fz.png"
    54  - alt="Hasura Con"
    55  - />
    56  - </div>
    57  - <div className={styles['content-div']}>
    58  - <h3>Combining Snowflake and PostgreSQL to build low-latency apps on historical data insights</h3>
    59  - <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
    60  - View Recording
    61  - <ArrowRight />
    62  - </div>
    63  - </div>
    64  - </a>
    65  - </div>
    66  - );
    67  - }
     46 + // if (isSnowFlakeSection) {
     47 + // return (
     48 + // <div className={styles['snowflake-bg']}>
     49 + // <a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/snowflake-and-postgresql/">
     50 + // <div className={styles['hasura-con-brand']}>
     51 + // <img
     52 + // className={styles['brand-light']}
     53 + // src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677756408/main-web/Group_11455_1_ziz1fz.png"
     54 + // alt="Hasura Con"
     55 + // />
     56 + // </div>
     57 + // <div className={styles['content-div']}>
     58 + // <h3>Combining Snowflake and PostgreSQL to build low-latency apps on historical data insights</h3>
     59 + // <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
     60 + // View Recording
     61 + // <ArrowRight />
     62 + // </div>
     63 + // </div>
     64 + // </a>
     65 + // </div>
     66 + // );
     67 + // }
    68 68   
    69  - if (isSnowFlakeSection) {
    70  - return (
    71  - <div className={styles['snowflake-bg']}>
    72  - <a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/snowflake-and-postgresql/">
    73  - <div className={styles['hasura-con-brand']}>
    74  - <img
    75  - className={styles['brand-light']}
    76  - src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677756408/main-web/Group_11455_1_ziz1fz.png"
    77  - alt="Hasura Con"
    78  - />
    79  - </div>
    80  - <div className={styles['content-div']}>
    81  - <h3>Combining Snowflake and PostgreSQL to build low-latency apps on historical data insights</h3>
    82  - <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
    83  - View Recording
    84  - <ArrowRight />
    85  - </div>
    86  - </div>
    87  - </a>
    88  - </div>
    89  - );
    90  - }
     69 + // if (isSnowFlakeSection) {
     70 + // return (
     71 + // <div className={styles['snowflake-bg']}>
     72 + // <a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/snowflake-and-postgresql/">
     73 + // <div className={styles['hasura-con-brand']}>
     74 + // <img
     75 + // className={styles['brand-light']}
     76 + // src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677756408/main-web/Group_11455_1_ziz1fz.png"
     77 + // alt="Hasura Con"
     78 + // />
     79 + // </div>
     80 + // <div className={styles['content-div']}>
     81 + // <h3>Combining Snowflake and PostgreSQL to build low-latency apps on historical data insights</h3>
     82 + // <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
     83 + // View Recording
     84 + // <ArrowRight />
     85 + // </div>
     86 + // </div>
     87 + // </a>
     88 + // </div>
     89 + // );
     90 + // }
    91 91   
    92  - if (isObservabilitySection) {
    93  - return (
    94  - <div className={styles['observe-bg']}>
    95  - <a
    96  - className={styles['webinar-banner']}
    97  - href="https://hasura.io/events/webinar/best-practices-for-api-observability-with-hasura/"
    98  - >
    99  - <div className={styles['hasura-con-brand']}>
    100  - <img
    101  - className={styles['brand-light']}
    102  - src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677759444/main-web/Group_11455_2_rdpykm.png"
    103  - alt="Hasura Con"
    104  - />
    105  - </div>
    106  - <div className={styles['content-div']}>
    107  - <h3>Best Practices for API Observability with Hasura</h3>
    108  - <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
    109  - View Recording
    110  - <ArrowRight />
    111  - </div>
    112  - </div>
    113  - </a>
    114  - </div>
    115  - );
    116  - }
     92 + // if (isObservabilitySection) {
     93 + // return (
     94 + // <div className={styles['observe-bg']}>
     95 + // <a
     96 + // className={styles['webinar-banner']}
     97 + // href="https://hasura.io/events/webinar/best-practices-for-api-observability-with-hasura/"
     98 + // >
     99 + // <div className={styles['hasura-con-brand']}>
     100 + // <img
     101 + // className={styles['brand-light']}
     102 + // src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677759444/main-web/Group_11455_2_rdpykm.png"
     103 + // alt="Hasura Con"
     104 + // />
     105 + // </div>
     106 + // <div className={styles['content-div']}>
     107 + // <h3>Best Practices for API Observability with Hasura</h3>
     108 + // <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
     109 + // View Recording
     110 + // <ArrowRight />
     111 + // </div>
     112 + // </div>
     113 + // </a>
     114 + // </div>
     115 + // );
     116 + // }
    117 117   
    118  - if (isSecuritySection) {
    119  - return (
    120  - <div className={styles['security-bg']}>
    121  - <a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/securing-your-api-with-hasura/">
    122  - <div className={styles['hasura-con-brand']}>
    123  - <img
    124  - className={styles['brand-light']}
    125  - src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677759811/main-web/Group_11455_3_azgk7w.png"
    126  - alt="Hasura Con"
    127  - />
    128  - </div>
    129  - <div className={styles['content-div']}>
    130  - <h3>Securing your API with Hasura</h3>
    131  - <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
    132  - View Recording
    133  - <ArrowRight />
    134  - </div>
    135  - </div>
    136  - </a>
    137  - </div>
    138  - );
    139  - }
     118 + // if (isSecuritySection) {
     119 + // return (
     120 + // <div className={styles['security-bg']}>
     121 + // <a className={styles['webinar-banner']} href="https://hasura.io/events/webinar/securing-your-api-with-hasura/">
     122 + // <div className={styles['hasura-con-brand']}>
     123 + // <img
     124 + // className={styles['brand-light']}
     125 + // src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1677759811/main-web/Group_11455_3_azgk7w.png"
     126 + // alt="Hasura Con"
     127 + // />
     128 + // </div>
     129 + // <div className={styles['content-div']}>
     130 + // <h3>Securing your API with Hasura</h3>
     131 + // <div className={styles['hasura-con-register'] + ' ' + styles['hasura-con-register-mobile-hide']}>
     132 + // View Recording
     133 + // <ArrowRight />
     134 + // </div>
     135 + // </div>
     136 + // </a>
     137 + // </div>
     138 + // );
     139 + // }
    140 140   
    141 141   return (
    142  - <a className={styles['hasura-con-banner']} href="https://hasura.io/events/hasura-con-2023/">
     142 + <a className={styles['hasura-con-banner']} href="https://hasura.io/events/hasura-con-2024">
    143 143   <div className={styles['hasura-con-brand']}>
    144  - <img
    145  - className={styles['hasuracon23-img']}
    146  - src="https://res.cloudinary.com/dh8fp23nd/image/upload/v1686154570/hasura-con-2023/has-con-light-date_r2a2ud.png"
    147  - alt="Hasura Con"
    148  - />
     144 + <svg
     145 + fill="none"
     146 + height="42"
     147 + viewBox="0 0 239 42"
     148 + width="239"
     149 + xmlns="http://www.w3.org/2000/svg"
     150 + >
     151 + <path
     152 + d="m38.0802 14.8938c1.1907-3.5976.5857-10.81721-1.6165-13.50688-.2856-.35146-.8325-.30753-1.0793.07322l-2.7976 4.31519c-.6921.85913-1.9166 1.0495-2.8265.42956-2.9572-2.00138-6.505-3.18757-10.3334-3.23639-3.8284-.04881-7.4052 1.05927-10.40597 2.98744-.92444.59553-2.14896.38075-2.81687-.49791l-2.69588-4.38352c-.23716-.385636-.78408-.439332-1.07932-.097632-2.265111 2.635972-3.03467 9.840922-1.931152 13.467822.367839 1.2009.459799 2.4749.2178 3.7099-.23716 1.2204-.479159 2.6995-.493679 3.7295-.121 10.5731 8.276381 19.2426 18.754971 19.3646 10.4834.122 19.0792-8.3473 19.2002-18.9156.0097-1.0299-.1936-2.5139-.4065-3.7391-.213-1.2399-.092-2.5091.3049-3.7002z"
     153 + fill="#3970fd"
     154 + />
     155 + <g fill="#fff">
     156 + <path d="m20.1496 13.6664 1.6087 4.6515c.0826.2432.3146.4088.5707.403l4.4542-.02c.589-.0015.8323.7504.3531 1.0931l-3.6412 2.5884c-.2098.1493-.303.414-.2266.6653l1.4019 4.6901c.1627.5462-.4578.9959-.9219.6646l-3.8586-2.7106c-.2079-.147-.4819-.1446-.6899-.0042l-3.8737 2.687c-.4712.3256-1.0886-.1234-.9175-.6697l1.4334-4.6816c.0754-.245-.0162-.5131-.2232-.6646l-3.6225-2.6137c-.4758-.3428-.2273-1.0965.3599-1.089l4.4545.0496c.2589.004.49-.1597.5743-.4029l1.6381-4.6373c.1881-.5383.952-.5335 1.1352.0027z" />
     157 + <path d="m57.4343 9.99387h4.0106v22.00613h-4.0106v-9.3814h-4.5338v9.3814h-4.0106v-22.00613h4.0106v9.55573h4.5338zm16.0444 22.00613-.837-4.5686h-4.8128l-.7672 4.5686h-4.0107l4.4292-22.00613h5.4056l4.6384 22.00613zm-5.1267-7.6028h3.7317l-1.9182-10.5322zm17.7397 3.5922v-4.7082c0-.372-.0698-.6161-.2093-.7323-.1395-.1395-.3952-.2093-.7672-.2093h-2.8249c-2.3947 0-3.5921-1.1625-3.5921-3.4875v-5.4056c0-2.3018 1.2555-3.45263 3.7665-3.45263h3.8362c2.511 0 3.7665 1.15083 3.7665 3.45263v3.069h-4.0455v-2.511c0-.372-.0697-.6161-.2092-.7324-.1395-.1395-.3953-.2092-.7673-.2092h-1.3252c-.3953 0-.6626.0697-.8021.2092-.1395.1163-.2093.3604-.2093.7324v4.4291c0 .372.0698.6278.2093.7673.1395.1162.4068.1743.8021.1743h2.7551c2.4413 0 3.6619 1.1393 3.6619 3.4178v5.7544c0 2.3017-1.2671 3.4526-3.8014 3.4526h-3.7665c-2.5342 0-3.8014-1.1509-3.8014-3.4526v-3.0342h4.0107v2.4762c0 .372.0697.6277.2092.7672.1395.1163.4069.1744.8021.1744h1.3253c.372 0 .6277-.0581.7672-.1744.1395-.1395.2093-.3952.2093-.7672zm14.4183-17.99553h4.01v18.55353c0 2.3017-1.267 3.4526-3.801 3.4526h-4.255c-2.5342 0-3.8014-1.1509-3.8014-3.4526v-18.55353h4.0107v17.99553c0 .372.0697.6277.2092.7672.1395.1163.3953.1744.7673.1744h1.8483c.3953 0 .6629-.0581.8019-.1744.14-.1395.21-.3952.21-.7672zm10.971 13.42683v8.5793h-4.011v-22.00613h8.091c2.535 0 3.802 1.15083 3.802 3.45263v6.5216c0 1.9065-.849 3.0225-2.546 3.348l3.662 8.6839h-4.325l-3.348-8.5793zm0-10.3578v7.3935h2.895c.372 0 .627-.0582.767-.1744.139-.1395.209-.3953.209-.7673v-5.5102c0-.372-.07-.6161-.209-.7324-.14-.1395-.395-.2092-.767-.2092zm19.659 18.9371-.837-4.5686h-4.813l-.767 4.5686h-4.011l4.429-22.00613h5.406l4.638 22.00613zm-5.127-7.6028h3.732l-1.919-10.5322zm22.273-7.1145h-4.045v-3.4177c0-.372-.07-.6161-.209-.7324-.14-.1395-.396-.2092-.768-.2092h-1.639c-.372 0-.628.0697-.767.2092-.14.1163-.209.3604-.209.7324v14.2987c0 .372.069.6278.209.7673.139.1162.395.1744.767.1744h1.639c.372 0 .628-.0582.768-.1744.139-.1395.209-.3953.209-.7673v-3.348h4.045v3.7665c0 2.3018-1.267 3.4527-3.801 3.4527h-4.08c-2.535 0-3.802-1.1509-3.802-3.4527v-15.1357c0-2.3018 1.267-3.45263 3.802-3.45263h4.08c2.534 0 3.801 1.15083 3.801 3.45263zm6.143-7.28883h4.255c2.534 0 3.801 1.15083 3.801 3.45263v15.1009c0 2.3017-1.267 3.4526-3.801 3.4526h-4.255c-2.534 0-3.801-1.1509-3.801-3.4526v-15.1009c0-2.3018 1.267-3.45263 3.801-3.45263zm4.045 17.99553v-13.9849c0-.372-.069-.6161-.209-.7324-.139-.1395-.395-.2092-.767-.2092h-1.848c-.396 0-.663.0697-.803.2092-.139.1163-.209.3604-.209.7324v13.9849c0 .372.07.6277.209.7672.14.1163.407.1744.803.1744h1.848c.372 0 .628-.0581.767-.1744.14-.1395.209-.3952.209-.7672zm15.405-17.99553h3.662v22.00613h-3.766l-4.709-14.5778v14.5778h-3.696v-22.00613h3.871l4.638 14.36853zm15.151 4.01063v3.2782h-4.011v-3.8362c0-2.3018 1.267-3.45263 3.801-3.45263h3.348c2.558 0 3.837 1.15083 3.837 3.45263v2.4761c0 1.7205-.524 3.3945-1.57 5.022l-4.568 7.9864h6.242v3.069h-10.985v-2.8946l5.684-9.207c.814-1.2323 1.221-2.5808 1.221-4.0455v-1.8484c0-.372-.07-.6161-.209-.7324-.14-.1395-.396-.2092-.768-.2092h-1.011c-.395 0-.663.0697-.802.2092-.14.1163-.209.3604-.209.7324zm16.849 13.9849v-13.9849c0-.372-.07-.6161-.209-.7324-.14-.1395-.407-.2092-.802-.2092h-1.604c-.372 0-.628.0697-.768.2092-.139.1163-.209.3604-.209.7324v13.9849c0 .372.07.6277.209.7672.14.1163.396.1744.768.1744h1.604c.395 0 .662-.0581.802-.1744.139-.1395.209-.3952.209-.7672zm4.011-14.5429v15.1009c0 2.3017-1.267 3.4526-3.802 3.4526h-4.045c-2.534 0-3.801-1.1509-3.801-3.4526v-15.1009c0-2.3018 1.267-3.45263 3.801-3.45263h4.045c2.535 0 3.802 1.15083 3.802 
3.45263zm6.063.558v3.2782h-4.011v-3.8362c0-2.3018 1.267-3.45263 3.802-3.45263h3.348c2.557 0 3.836 1.15083 3.836 3.45263v2.4761c0 1.7205-.523 3.3945-1.57 5.022l-4.568 7.9864h6.242v3.069h-10.985v-2.8946l5.684-9.207c.814-1.2323 1.221-2.5808 1.221-4.0455v-1.8484c0-.372-.07-.6161-.209-.7324-.14-.1395-.395-.2092-.767-.2092h-1.012c-.395 0-.662.0697-.802.2092-.139.1163-.209.3604-.209.7324zm21.802 10.6369v3.1038h-2.093v4.2548h-3.801v-4.2548h-7.987v-3.4177l6.906-14.33363h4.01l-7.254 14.64753h4.325v-4.2548h3.801v4.2548z" />
     158 + </g>
     159 + </svg>
    149 160   </div>
    150 161   <div className={styles['hasura-con-space-between']}>
    151 162   <div>
    152  - <div className={styles['hasura-con-23-title']}>The fourth annual Hasura User Conference</div>
     163 + <div className={styles['hasura-con-23-title']}>The HasuraCon 2024 CFP is open!</div>
    153 164   </div>
    154 165   <div className={styles['hasura-con-register-button'] + ' ' + styles['hasura-con-register-mobile-hide']}>
    155 166   Read more
    skipped 12 lines
  • ■ ■ ■ ■ ■ ■
    docs/src/components/HasuraConBanner/styles.module.scss
    skipped 70 lines
    71 71   font-size: var(--ifm-small-font-size);
    72 72   font-weight: var(--ifm-font-weight-semibold);
    73 73   align-self: center;
     74 + display: grid;
    74 75   img {
    75 76   width: 97px;
    76 77   }
    77  - 
     78 + svg {
     79 + width: 170px;
     80 + }
    78 81   .hasuracon23-img {
    79 82   min-width: 159px;
    80 83   // margin-right: 42px;
    skipped 135 lines
    216 219  @media (min-width: 997px) and (max-width: 1380px) {
    217 220   .hasura-con-banner {
    218 221   grid-template-columns: 1fr;
    219  - grid-gap: 20px;
     222 + grid-gap: 20px !important;
    220 223   .hasura-con-register-button {
    221 224   margin-top: 20px;
    222 225   }
    skipped 144 lines
  • docs/static/img/docs-bot-profile-pic.webp
  • docs/static/img/hasura-ai-profile-pic.png
  • ■ ■ ■ ■ ■ ■
    flake.lock
    skipped 4 lines
    5 5   "systems": "systems"
    6 6   },
    7 7   "locked": {
    8  - "lastModified": 1694529238,
    9  - "narHash": "sha256-zsNZZGTGnMOf9YpHKJqMSsa0dXbfmxeoJ7xHlrt+xmY=",
     8 + "lastModified": 1710146030,
     9 + "narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
    10 10   "owner": "numtide",
    11 11   "repo": "flake-utils",
    12  - "rev": "ff7b65b44d01cf9ba6a71320833626af21126384",
     12 + "rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
    13 13   "type": "github"
    14 14   },
    15 15   "original": {
    skipped 4 lines
    20 20   },
    21 21   "nixpkgs": {
    22 22   "locked": {
    23  - "lastModified": 1699914561,
    24  - "narHash": "sha256-b296O45c3Jgj8GEFg/NN7ZOJjBBCHr1o2iA4yoJ3OKE=",
     23 + "lastModified": 1710754590,
     24 + "narHash": "sha256-9LA94zYvr5a6NawEftuSdTP8HYMV0ZYdB5WG6S9Z7tI=",
    25 25   "owner": "NixOS",
    26 26   "repo": "nixpkgs",
    27  - "rev": "2f8742189e9ef86961ab90a30c68eb844565578a",
     27 + "rev": "a089e2dc4cf2421ca29f2d5ced81badd5911fcdf",
    28 28   "type": "github"
    29 29   },
    30 30   "original": {
    skipped 31 lines
  • ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/components/Services/Data/TableRelationships/autoRelations.js
    skipped 68 lines
    69 69   currRCol = Object.values(arrRelDef.manual_configuration.column_mapping);
    70 70   }
    71 71   
    72  - if (currTable.name === relTable && sameRelCols(currRCol, relCols)) {
     72 + if (currTable?.name === relTable && sameRelCols(currRCol, relCols)) {
    73 73   _isExistingArrRel = true;
    74 74   break;
    75 75   }
    skipped 137 lines
  • ■ ■ ■ ■
    frontend/package.json
    skipped 84 lines
    85 85   "dom-parser": "0.1.6",
    86 86   "form-urlencoded": "^6.1.0",
    87 87   "format-graphql": "^1.4.0",
    88  - "graphiql": "1.4.7",
     88 + "graphiql": "1.0.0-alpha.0",
    89 89   "graphiql-code-exporter": "2.0.8",
    90 90   "graphiql-explorer": "0.6.2",
    91 91   "graphql": "14.5.8",
    skipped 302 lines
  • frontend/yarn.lock
    Unable to diff as the file is too large.
  • ■ ■ ■ ■
    install-manifests/azure-container/azuredeploy.json
    skipped 54 lines
    55 55   "dbName": "[parameters('postgresDatabaseName')]",
    56 56   "containerGroupName": "[concat(parameters('name'), '-container-group')]",
    57 57   "containerName": "hasura-graphql-engine",
    58  - "containerImage": "hasura/graphql-engine:v2.37.0"
     58 + "containerImage": "hasura/graphql-engine:v2.38.0"
    59 59   },
    60 60   "resources": [
    61 61   {
    skipped 70 lines
  • ■ ■ ■ ■
    install-manifests/azure-container-with-pg/azuredeploy.json
    skipped 97 lines
    98 98   "firewallRuleName": "allow-all-azure-firewall-rule",
    99 99   "containerGroupName": "[concat(parameters('name'), '-container-group')]",
    100 100   "containerName": "hasura-graphql-engine",
    101  - "containerImage": "hasura/graphql-engine:v2.37.0"
     101 + "containerImage": "hasura/graphql-engine:v2.38.0"
    102 102   },
    103 103   "resources": [
    104 104   {
    skipped 122 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/docker-compose/docker-compose.yaml
    skipped 7 lines
    8 8   environment:
    9 9   POSTGRES_PASSWORD: postgrespassword
    10 10   graphql-engine:
    11  - image: hasura/graphql-engine:v2.37.0
     11 + image: hasura/graphql-engine:v2.38.0
    12 12   ports:
    13 13   - "8080:8080"
    14 14   restart: always
    skipped 16 lines
    31 31   data-connector-agent:
    32 32   condition: service_healthy
    33 33   data-connector-agent:
    34  - image: hasura/graphql-data-connector:v2.37.0
     34 + image: hasura/graphql-data-connector:v2.38.0
    35 35   restart: always
    36 36   ports:
    37 37   - 8081:8081
    skipped 14 lines
  • ■ ■ ■ ■
    install-manifests/docker-compose-cockroach/docker-compose.yaml
    skipped 26 lines
    27 27   - "${PWD}/cockroach-data:/cockroach/cockroach-data"
    28 28   
    29 29   graphql-engine:
    30  - image: hasura/graphql-engine:v2.37.0
     30 + image: hasura/graphql-engine:v2.38.0
    31 31   ports:
    32 32   - "8080:8080"
    33 33   depends_on:
    skipped 19 lines
  • ■ ■ ■ ■
    install-manifests/docker-compose-https/docker-compose.yaml
    skipped 7 lines
    8 8   environment:
    9 9   POSTGRES_PASSWORD: postgrespassword
    10 10   graphql-engine:
    11  - image: hasura/graphql-engine:v2.37.0
     11 + image: hasura/graphql-engine:v2.38.0
    12 12   depends_on:
    13 13   - "postgres"
    14 14   restart: always
    skipped 29 lines
  • ■ ■ ■ ■
    install-manifests/docker-compose-ms-sql-server/docker-compose.yaml
    skipped 14 lines
    15 15   volumes:
    16 16   - mssql_data:/var/opt/mssql
    17 17   graphql-engine:
    18  - image: hasura/graphql-engine:v2.37.0
     18 + image: hasura/graphql-engine:v2.38.0
    19 19   ports:
    20 20   - "8080:8080"
    21 21   depends_on:
    skipped 21 lines
  • ■ ■ ■ ■
    install-manifests/docker-compose-pgadmin/docker-compose.yaml
    skipped 18 lines
    19 19   PGADMIN_DEFAULT_EMAIL: [email protected]
    20 20   PGADMIN_DEFAULT_PASSWORD: admin
    21 21   graphql-engine:
    22  - image: hasura/graphql-engine:v2.37.0
     22 + image: hasura/graphql-engine:v2.38.0
    23 23   ports:
    24 24   - "8080:8080"
    25 25   depends_on:
    skipped 17 lines
  • ■ ■ ■ ■
    install-manifests/docker-compose-postgis/docker-compose.yaml
    skipped 7 lines
    8 8   environment:
    9 9   POSTGRES_PASSWORD: postgrespassword
    10 10   graphql-engine:
    11  - image: hasura/graphql-engine:v2.37.0
     11 + image: hasura/graphql-engine:v2.38.0
    12 12   ports:
    13 13   - "8080:8080"
    14 14   depends_on:
    skipped 16 lines
  • ■ ■ ■ ■
    install-manifests/docker-compose-yugabyte/docker-compose.yaml
    skipped 22 lines
    23 23   - yugabyte-data:/var/lib/postgresql/data
    24 24   
    25 25   graphql-engine:
    26  - image: hasura/graphql-engine:v2.37.0
     26 + image: hasura/graphql-engine:v2.38.0
    27 27   ports:
    28 28   - "8080:8080"
    29 29   depends_on:
    skipped 22 lines
  • ■ ■ ■ ■
    install-manifests/docker-run/docker-run.sh
    skipped 2 lines
    3 3   -e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
    4 4   -e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
    5 5   -e HASURA_GRAPHQL_DEV_MODE=true \
    6  - hasura/graphql-engine:v2.37.0
     6 + hasura/graphql-engine:v2.38.0
    7 7   
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/athena/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.37.0
     16 + image: hasura/graphql-engine:v2.38.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/graphql-data-connector:v2.37.0
     51 + image: hasura/graphql-data-connector:v2.38.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8081:8081
    skipped 14 lines
  • ■ ■ ■ ■
    install-manifests/enterprise/aws-ecs/hasura-fargate-task.json
    skipped 3 lines
    4 4   "containerDefinitions": [
    5 5   {
    6 6   "name": "hasura",
    7  - "image": "hasura/graphql-engine:v2.37.0",
     7 + "image": "hasura/graphql-engine:v2.38.0",
    8 8   "portMappings": [
    9 9   {
    10 10   "hostPort": 8080,
    skipped 67 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/clickhouse/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.37.0
     16 + image: hasura/graphql-engine:v2.38.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/clickhouse-data-connector:v2.37.0
     51 + image: hasura/clickhouse-data-connector:v2.38.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8080:8081
    skipped 9 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/docker-compose/docker-compose.yaml
    skipped 14 lines
    15 15   environment:
    16 16   POSTGRES_PASSWORD: postgrespassword
    17 17   graphql-engine:
    18  - image: hasura/graphql-engine:v2.37.0
     18 + image: hasura/graphql-engine:v2.38.0
    19 19   ports:
    20 20   - "8080:8080"
    21 21   restart: always
    skipped 25 lines
    47 47   data-connector-agent:
    48 48   condition: service_healthy
    49 49   data-connector-agent:
    50  - image: hasura/graphql-data-connector:v2.37.0
     50 + image: hasura/graphql-data-connector:v2.38.0
    51 51   restart: always
    52 52   ports:
    53 53   - 8081:8081
    skipped 14 lines
  • ■ ■ ■ ■
    install-manifests/enterprise/kubernetes/deployment.yaml
    skipped 17 lines
    18 18   fsGroup: 1001
    19 19   runAsUser: 1001
    20 20   containers:
    21  - - image: hasura/graphql-engine:v2.37.0
     21 + - image: hasura/graphql-engine:v2.38.0
    22 22   imagePullPolicy: IfNotPresent
    23 23   name: hasura
    24 24   readinessProbe:
    skipped 80 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/mariadb/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.37.0
     16 + image: hasura/graphql-engine:v2.38.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/graphql-data-connector:v2.37.0
     51 + image: hasura/graphql-data-connector:v2.38.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8081:8081
    skipped 19 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/mongodb/docker-compose.yaml
    skipped 29 lines
    30 30   MONGO_INITDB_ROOT_USERNAME: mongouser
    31 31   MONGO_INITDB_ROOT_PASSWORD: mongopassword
    32 32   hasura:
    33  - image: hasura/graphql-engine:v2.37.0
     33 + image: hasura/graphql-engine:v2.38.0
    34 34   restart: always
    35 35   ports:
    36 36   - 8080:8080
    skipped 23 lines
    60 60   postgres:
    61 61   condition: service_healthy
    62 62   mongo-data-connector:
    63  - image: hasura/mongo-data-connector:v2.37.0
     63 + image: hasura/mongo-data-connector:v2.38.0
    64 64   ports:
    65 65   - 3000:3000
    66 66  volumes:
    skipped 3 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/mysql/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.37.0
     16 + image: hasura/graphql-engine:v2.38.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/graphql-data-connector:v2.37.0
     51 + image: hasura/graphql-data-connector:v2.38.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8081:8081
    skipped 22 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/oracle/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.37.0
     16 + image: hasura/graphql-engine:v2.38.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/graphql-data-connector:v2.37.0
     51 + image: hasura/graphql-data-connector:v2.38.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8081:8081
    skipped 21 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/redshift/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.37.0
     16 + image: hasura/graphql-engine:v2.38.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/graphql-data-connector:v2.37.0
     51 + image: hasura/graphql-data-connector:v2.38.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8081:8081
    skipped 14 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/snowflake/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.37.0
     16 + image: hasura/graphql-engine:v2.38.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/graphql-data-connector:v2.37.0
     51 + image: hasura/graphql-data-connector:v2.38.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8081:8081
    skipped 14 lines
  • ■ ■ ■ ■
    install-manifests/google-cloud-k8s-sql/deployment.yaml
    skipped 15 lines
    16 16   spec:
    17 17   containers:
    18 18   - name: graphql-engine
    19  - image: hasura/graphql-engine:v2.37.0
     19 + image: hasura/graphql-engine:v2.38.0
    20 20   ports:
    21 21   - containerPort: 8080
    22 22   readinessProbe:
    skipped 60 lines
  • ■ ■ ■ ■
    install-manifests/kubernetes/deployment.yaml
    skipped 17 lines
    18 18   app: hasura
    19 19   spec:
    20 20   containers:
    21  - - image: hasura/graphql-engine:v2.37.0
     21 + - image: hasura/graphql-engine:v2.38.0
    22 22   imagePullPolicy: IfNotPresent
    23 23   name: hasura
    24 24   env:
    skipped 22 lines
  • ■ ■ ■ ■ ■
    nix/overlays/graphql-parser.nix
    skipped 4 lines
    5 5   overrides = prev.lib.composeExtensions
    6 6   (old.overrides or (_: _: { }))
    7 7   (hfinal: hprev: {
    8  - graphql-parser = (final.haskell.packages.${prev.ghcName}.callCabal2nix "graphql-parser" ../../server/lib/graphql-parser-hs { }).overrideScope (
    9  - final: prev: {
    10  - hedgehog = final.hedgehog_1_2;
    11  - }
    12  - );
     8 + graphql-parser = (final.haskell.packages.${prev.ghcName}.callCabal2nix "graphql-parser" ../../server/lib/graphql-parser { });
    13 9   });
    14 10   });
    15 11   };
    skipped 3 lines
  • ■ ■ ■ ■ ■ ■
    nix/shell.nix
    skipped 77 lines
    78 78   pkgs.jq
    79 79   ];
    80 80   
    81  - consoleInputs = [
    82  - pkgs.google-cloud-sdk
    83  - pkgs."nodejs-${versions.nodejsVersion}_x"
    84  - pkgs."nodejs-${versions.nodejsVersion}_x".pkgs.typescript-language-server
    85  - ];
    86  - 
    87 81   docsInputs = [
    88 82   pkgs.yarn
    89 83   ];
    90 84   
    91 85   integrationTestInputs = [
     86 + pkgs.nodejs
    92 87   pkgs.python3
    93 88   pkgs.pyright # Python type checker
    94 89   ];
    skipped 6 lines
    101 96   hls
    102 97   
    103 98   pkgs.haskell.packages.${pkgs.ghcName}.alex
    104  - # pkgs.haskell.packages.${pkgs.ghcName}.apply-refact
     99 + pkgs.haskell.packages.${pkgs.ghcName}.apply-refact
    105 100   (versions.ensureVersion pkgs.haskell.packages.${pkgs.ghcName}.cabal-install)
    106 101   (pkgs.haskell.lib.dontCheck (pkgs.haskell.packages.${pkgs.ghcName}.ghcid))
    107 102   pkgs.haskell.packages.${pkgs.ghcName}.happy
    skipped 55 lines
    163 158   ++ integrationTestInputs;
    164 159  in
    165 160  pkgs.mkShell ({
    166  - buildInputs = baseInputs ++ consoleInputs ++ docsInputs ++ serverDeps ++ devInputs ++ ciInputs;
     161 + buildInputs = baseInputs ++ docsInputs ++ serverDeps ++ devInputs ++ ciInputs;
    167 162  } // pkgs.lib.optionalAttrs pkgs.stdenv.isDarwin {
    168 163   shellHook = ''
    169 164   export DYLD_LIBRARY_PATH='${dynamicLibraryPath}'
    skipped 3 lines
  • ■ ■ ■ ■ ■ ■
    nix/versions.nix
    skipped 10 lines
    11 11   else throw "Invalid version for package ${package.pname}: expected ${expected}, got ${package.version}";
    12 12   
    13 13   ghcVersion = pkgs.lib.strings.fileContents ../.ghcversion;
    14  - 
    15  - nodejsVersion = pkgs.lib.strings.fileContents ../.nvmrc;
    16 14  }
    17 15   
  • ■ ■ ■ ■ ■ ■
    packaging/graphql-engine-base/ubuntu.dockerfile
    1  -# DATE VERSION: 2024-01-23
     1 +# DATE VERSION: 2024-03-13
    2 2  # Modify the above date version (YYYY-MM-DD) if you want to rebuild the image
    3 3   
    4  -FROM ubuntu:jammy-20240111
     4 +FROM ubuntu:jammy-20240227
    5 5   
    6 6  ### NOTE! Shared libraries here need to be kept in sync with `server-builder.dockerfile`!
    7 7   
    skipped 47 lines
  • ■ ■ ■ ■
    server/VERSIONS.json
    1 1  {
    2  - "cabal-install": "3.10.1.0",
     2 + "cabal-install": "3.10.2.1",
    3 3   "ghc": "9.6.4",
    4 4   "hlint": "3.6.1",
    5 5   "ormolu": "0.7.2.0"
    skipped 2 lines
  • ■ ■ ■ ■
    server/graphql-engine.cabal
    skipped 423 lines
    424 424   
    425 425   -- logging related
    426 426   , base64-bytestring >= 1.0
    427  - , auto-update
    428 427   
    429 428   -- regex related
    430 429   , regex-tdfa >=1.3.1 && <1.4
    skipped 231 lines
    662 661   -- Exposed for benchmark:
    663 662   , Hasura.Cache.Bounded
    664 663   , Hasura.CredentialCache
     664 + , Hasura.CachedTime
    665 665   , Hasura.Logging
    666 666   , Hasura.HTTP
    667 667   , Hasura.PingSources
    skipped 638 lines
  • ■ ■ ■ ■ ■ ■
    server/lib/pg-client/pg-client.cabal
    skipped 53 lines
    54 54   Database.PG.Query.Pool
    55 55   Database.PG.Query.PTI
    56 56   Database.PG.Query.Transaction
     57 + Database.PG.Query.URL
    57 58   
    58 59   build-depends:
    59 60   , aeson
    skipped 5 lines
    65 66   , ekg-prometheus
    66 67   , hashable
    67 68   , hashtables
     69 + -- for our HASURA_SECRETS_BLOCKING_FORCE_REFRESH_URL hook
     70 + , http-client
     71 + , http-types
    68 72   , mmorph
    69 73   , monad-control
    70 74   , mtl
    skipped 23 lines
    94 98   Interrupt
    95 99   Timeout
    96 100   Jsonb
     101 + URL
    97 102   
    98 103   build-depends:
     104 + , aeson
    99 105   , async
    100 106   , base
    101 107   , bytestring
    102 108   , hspec
     109 + , mtl
    103 110   , pg-client
     111 + , postgresql-libpq
    104 112   , safe-exceptions
    105 113   , time
    106 114   , transformers
    107  - , aeson
    108  - , mtl
    109  - , postgresql-libpq
    110 115   
    111 116  benchmark pg-client-bench
    112 117   import: common-all
    skipped 10 lines
    123 128   , hasql-transaction
    124 129   , pg-client
    125 130   , tasty-bench
    126  - , text
    127 131   , transformers
    128 132   
  • ■ ■ ■ ■ ■ ■
    server/lib/pg-client/src/Database/PG/Query/Connection.hs
    skipped 47 lines
    48 48   
    49 49  import Control.Concurrent.Interrupt (interruptOnAsyncException)
    50 50  import Control.Exception.Safe (Exception, SomeException (..), catch, throwIO)
     51 +import Control.Monad (unless)
    51 52  import Control.Monad.Except (MonadError (throwError))
    52 53  import Control.Monad.IO.Class (MonadIO (liftIO))
    53 54  import Control.Monad.Trans.Class (lift)
    54 55  import Control.Monad.Trans.Except (ExceptT, runExceptT, withExceptT)
    55 56  import Control.Retry (RetryPolicyM)
    56 57  import Control.Retry qualified as Retry
    57  -import Data.Aeson (ToJSON (toJSON), Value (String), genericToJSON, object, (.=))
     58 +import Data.Aeson (ToJSON (toJSON), Value (String), encode, genericToJSON, object, (.=))
    58 59  import Data.Aeson.Casing (aesonDrop, snakeCase)
    59 60  import Data.Aeson.TH (mkToJSON)
    60 61  import Data.Bool (bool)
    skipped 13 lines
    74 75  import Data.Text.Encoding.Error (lenientDecode)
    75 76  import Data.Time (NominalDiffTime, UTCTime)
    76 77  import Data.Word (Word16, Word32)
     78 +import Database.PG.Query.URL (encodeURLPassword)
    77 79  import Database.PostgreSQL.LibPQ qualified as PQ
    78 80  import Database.PostgreSQL.Simple.Options qualified as Options
    79 81  import GHC.Generics (Generic)
     82 +import Network.HTTP.Client
     83 +import Network.HTTP.Types.Status (statusCode)
     84 +import System.Environment (lookupEnv)
    80 85  import Prelude
    81 86   
    82 87  {-# ANN module ("HLint: ignore Use tshow" :: String) #-}
    skipped 35 lines
    118 123   <> Text.pack path
    119 124   <> ": "
    120 125   <> Text.pack (show e)
    121  - pure $ Text.strip uriDirty
     126 + pure $ encodeURLPassword $ Text.strip uriDirty
    122 127   where
    123 128   -- Text.readFile but explicit, ignoring locale:
    124 129   readFileUtf8 = fmap decodeUtf8 . BS.readFile
    skipped 84 lines
    209 214  pgRetrying ::
    210 215   (MonadIO m) =>
    211 216   Maybe String ->
     217 + -- | An action to perform on error
    212 218   IO () ->
    213 219   PGRetryPolicyM m ->
    214 220   PGLogger ->
    skipped 27 lines
    242 248   IO PQ.Connection
    243 249  initPQConn ci logger = do
    244 250   host <- extractHost (ciDetails ci)
     251 + -- if this is a dynamic connection, we'll signal to refresh the secret (if
     252 + -- configured) during each retry, ensuring we don't make too many connection
     253 + -- attempts with the wrong credentials and risk getting locked out
     254 + resetFn <- do
     255 + mbUrl <- lookupEnv "HASURA_SECRETS_BLOCKING_FORCE_REFRESH_URL"
     256 + case (mbUrl, ciDetails ci) of
     257 + (Just url, CDDynamicDatabaseURI path) -> do
     258 + manager <- newManager defaultManagerSettings
     259 + 
     260 + -- Create the request
     261 + let body = encode $ object ["filename" .= path]
     262 + initialRequest <- parseRequest url
     263 + let request =
     264 + initialRequest
     265 + { method = "POST",
     266 + requestBody = RequestBodyLBS body,
     267 + requestHeaders = [("Content-Type", "application/json")]
     268 + }
     269 + 
     270 + -- The action to perform on each retry. This must only return after
     271 + -- the secrets file has been refreshed.
     272 + return $ do
     273 + status <- statusCode . responseStatus <$> httpLbs request manager
     274 + unless (status >= 200 && status < 300) $
     275 + logger $
     276 + PLERetryMsg $
     277 + object
     278 + ["message" .= String "Forcing refresh of secret file at HASURA_SECRETS_BLOCKING_FORCE_REFRESH_URL seems to have failed. Retrying anyway."]
     279 + _ -> pure $ pure ()
     280 + 
    245 281   -- Retry if postgres connection error occurs
    246 282   pgRetrying host resetFn retryP logger $ do
    247 283   -- Initialise the connection
    skipped 4 lines
    252 288   let connOk = s == PQ.ConnectionOk
    253 289   bool (whenConnNotOk conn) (whenConnOk conn) connOk
    254 290   where
    255  - resetFn = return ()
    256 291   retryP = mkPGRetryPolicy $ ciRetries ci
    257 292   
    258 293   whenConnNotOk conn = Left . PGConnErr <$> readConnErr conn
    skipped 430 lines
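    For context, a minimal sketch of the request the new retry hook issues when `HASURA_SECRETS_BLOCKING_FORCE_REFRESH_URL` is set. The refresh-service URL and the secrets file path below are hypothetical; the method, header, and body mirror the code in the hunk above.
    ```haskell
    {-# LANGUAGE OverloadedStrings #-}

    -- Sketch only (not part of the diff): the hook's request, spelled out for a
    -- hypothetical refresh service at http://localhost:9999/refresh and a
    -- dynamic database URI file at /secrets/db-url.
    import Data.Aeson (encode, object, (.=))
    import Network.HTTP.Client
    import Network.HTTP.Types.Status (statusCode)

    forceSecretRefresh :: IO Bool
    forceSecretRefresh = do
      manager <- newManager defaultManagerSettings
      initialRequest <- parseRequest "http://localhost:9999/refresh"
      let request =
            initialRequest
              { method = "POST",
                -- the engine tells the service which secrets file to refresh
                requestBody = RequestBodyLBS $ encode $ object ["filename" .= ("/secrets/db-url" :: String)],
                requestHeaders = [("Content-Type", "application/json")]
              }
      -- the service must respond only after the file has been refreshed;
      -- any 2xx status counts as success
      status <- statusCode . responseStatus <$> httpLbs request manager
      pure (status >= 200 && status < 300)
    ```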
  • ■ ■ ■ ■ ■ ■
    server/lib/pg-client/src/Database/PG/Query/Pool.hs
    skipped 14 lines
    15 15   PGPoolStats (..),
    16 16   PGPoolMetrics (..),
    17 17   getInUseConnections,
     18 + getMaxConnections,
    18 19   defaultConnParams,
    19 20   initPGPool,
    20 21   resizePGPool,
    skipped 75 lines
    96 97   
    97 98  getInUseConnections :: PGPool -> IO Int
    98 99  getInUseConnections = RP.getInUseResourceCount . _pool
     100 + 
     101 +getMaxConnections :: PGPool -> IO Int
     102 +getMaxConnections = RP.getMaxResources . _pool
    99 103   
    100 104  data ConnParams = ConnParams
    101 105   { cpStripes :: !Int,
    skipped 270 lines
  • ■ ■ ■ ■ ■ ■
    server/lib/pg-client/src/Database/PG/Query/URL.hs
     1 +{-# LANGUAGE DerivingStrategies #-}
     2 +{-# LANGUAGE OverloadedStrings #-}
     3 + 
     4 +module Database.PG.Query.URL
     5 + ( encodeURLPassword,
     6 + )
     7 +where
     8 + 
     9 +import Data.Text (Text)
     10 +import Data.Text qualified as Text
     11 +import Data.Text.Encoding (decodeUtf8, encodeUtf8)
     12 +import Network.HTTP.Types.URI (urlEncode)
     13 +import Prelude
     14 + 
      15 +-- | It is possible and common for postgres URLs to have passwords with special
      16 +-- characters in them (e.g. AWS Secrets Manager passwords). Current URI-parsing
      17 +-- libraries fail to parse postgres URIs with special characters. Note also
      18 +-- that encoding the whole URI causes postgres to fail, so this encodes only
      19 +-- the password when given a URL.
     20 +encodeURLPassword :: Text -> Text
     21 +encodeURLPassword url =
     22 + case Text.breakOnEnd "://" url of
     23 + (_, "") -> url
     24 + (scheme, urlWOScheme) -> case Text.breakOnEnd "@" urlWOScheme of
     25 + ("", _) -> url
     26 + (auth, rest) -> case Text.splitOn ":" $ Text.dropEnd 1 auth of
     27 + [user] -> scheme <> user <> "@" <> rest
     28 + (user : pass) -> scheme <> user <> ":" <> encode' pass <> "@" <> rest
     29 + _ -> url
     30 + where
     31 + encode' arg =
     32 + decodeUtf8 $ urlEncode True (encodeUtf8 $ Text.intercalate ":" arg)
     33 + 
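    Concretely, the intended behaviour (input/output pairs taken from the new test spec below):
    ```haskell
    -- GHCi sketch of encodeURLPassword; only the password component is encoded
    -- λ> encodeURLPassword "postgres://user:pass@localhost:5432/chinook"
    -- "postgres://user:pass@localhost:5432/chinook"
    -- λ> encodeURLPassword "postgres://user:a[:sdf($#)]@localhost:5432/chinook"
    -- "postgres://user:a%5B%3Asdf%28%24%23%29%5D@localhost:5432/chinook"
    ```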
  • ■ ■ ■ ■ ■ ■
    server/lib/pg-client/test/Spec.hs
    skipped 23 lines
    24 24  import System.Environment qualified as Env
    25 25  import Test.Hspec (describe, hspec, it, shouldBe, shouldReturn)
    26 26  import Timeout (specTimeout)
     27 +import URL (specURL)
    27 28  import Prelude
    28 29   
    29 30  -------------------------------------------------------------------------------
    skipped 52 lines
    82 83   specInterrupt
    83 84   specTimeout
    84 85   specJsonb
     86 + specURL
    85 87   
    86 88  mkPool :: IO PGPool
    87 89  mkPool = do
    skipped 86 lines
  • ■ ■ ■ ■ ■ ■
    server/lib/pg-client/test/URL.hs
     1 +{-# LANGUAGE DerivingStrategies #-}
     2 +{-# LANGUAGE FlexibleInstances #-}
     3 +{-# LANGUAGE OverloadedStrings #-}
     4 +{-# LANGUAGE ScopedTypeVariables #-}
     5 +{-# OPTIONS_GHC -Wno-unused-imports -Wno-orphans -Wno-name-shadowing #-}
     6 + 
     7 +module URL (specURL) where
     8 + 
     9 +import Database.PG.Query.URL
     10 +import Test.Hspec
     11 +import Prelude
     12 + 
     13 +specURL :: Spec
     14 +specURL = do
      15 + describe "Only the password from a postgres url is encoded, if it exists" $ do
      16 + it "Non-Postgres connection urls succeed" $ do
     17 + let url = "jdbc:mysql://localhostok?user=root&password=pass&allowMultiQueries=true"
     18 + url `shouldBe` encodeURLPassword url
     19 + 
     20 + it "Postgres simple urls succeed" $ do
     21 + let url = "postgres://localhost"
     22 + url `shouldBe` encodeURLPassword url
     23 + 
     24 + it "Postgres urls with no username, password, or database succeed" $ do
     25 + let url = "postgres://localhost:5432"
     26 + url `shouldBe` encodeURLPassword url
     27 + 
     28 + it "Postgres urls with no username or password succeed" $ do
     29 + let url = "postgres://localhost:5432/chinook"
     30 + url `shouldBe` encodeURLPassword url
     31 + 
     32 + it "Postgres urls with no password succeed" $ do
     33 + let url = "postgres://user@localhost:5432/chinook"
     34 + url `shouldBe` encodeURLPassword url
     35 + 
     36 + it "Postgres urls with no password but a : succeed" $ do
     37 + let url = "postgres://user:@localhost:5432/chinook"
     38 + url `shouldBe` encodeURLPassword url
     39 + 
     40 + it "Postgres urls with no username succeed" $ do
     41 + let url = "postgres://:pass@localhost:5432/chinook"
     42 + url `shouldBe` encodeURLPassword url
     43 + 
     44 + it "Postgres urls with simple passwords succeed" $ do
     45 + let url = "postgres://user:pass@localhost:5432/chinook"
     46 + url `shouldBe` encodeURLPassword url
     47 + 
     48 + it "Postgres urls with special characters passwords succeed" $ do
     49 + let url = "postgres://user:a[:sdf($#)]@localhost:5432/chinook"
     50 + expected = "postgres://user:a%5B%3Asdf%28%24%23%29%5D@localhost:5432/chinook"
     51 + 
     52 + expected `shouldBe` encodeURLPassword url
     53 + 
     54 + it "Postgres urls with special characters with @ passwords succeed" $ do
     55 + let url = "postgres://user:a@[:sdf($@#@)]@localhost:5432/chinook"
     56 + expected = "postgres://user:a%40%5B%3Asdf%28%24%40%23%40%29%5D@localhost:5432/chinook"
     57 + 
     58 + expected `shouldBe` encodeURLPassword url
     59 + 
  • ■ ■ ■ ■ ■ ■
    server/lib/resource-pool/Data/Pool.hs
    skipped 33 lines
    34 34   createPool,
    35 35   createPool',
    36 36   resizePool,
     37 + getMaxResources,
    37 38   tryTrimLocalPool,
    38 39   tryTrimPool,
    39 40   withResource,
    skipped 190 lines
    230 231   modError "pool " $
    231 232   "invalid maximum resource count " ++ show maxResources'
    232 233   atomically $ writeTVar maxResources maxResources'
     234 + 
     235 +getMaxResources :: Pool a -> IO Int
     236 +getMaxResources Pool {..} = readTVarIO maxResources
    233 237   
    234 238  -- | Attempt to reduce resource allocation below maximum by dropping some unused
    235 239  -- resources
    skipped 286 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/App.hs
    skipped 712 lines
    713 713   
    714 714   buildExtraHttpLogMetadata _ _ = ()
    715 715   
    716  - logHttpError logger loggingSettings userInfoM reqId waiReq req qErr headers _ _ =
     716 + logHttpError logger loggingSettings userInfoM reqId waiReq req qErr qTime cType headers _ _ =
    717 717   unLoggerTracing logger
    718 718   $ mkHttpLog
    719  - $ mkHttpErrorLogContext userInfoM loggingSettings reqId waiReq req qErr Nothing Nothing headers
     719 + $ mkHttpErrorLogContext userInfoM loggingSettings reqId waiReq req qErr qTime cType headers
    720 720   
    721 721   logHttpSuccess logger loggingSettings userInfoM reqId waiReq reqBody response compressedResponse qTime cType headers (CommonHttpLogMetadata rb batchQueryOpLogs, ()) _ =
    722 722   unLoggerTracing logger
    skipped 807 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/CachedTime.hs
     1 +-- safety for unsafePerformIO below
     2 +{-# OPTIONS_GHC -fno-cse -fno-full-laziness #-}
     3 + 
     4 +module Hasura.CachedTime (cachedRecentFormattedTimeAndZone) where
     5 + 
     6 +import Control.Concurrent (forkIO, threadDelay)
     7 +import Control.Exception (uninterruptibleMask_)
     8 +import Data.ByteString.Char8 qualified as B8
     9 +import Data.IORef
     10 +import Data.Time.Clock qualified as Time
     11 +import Data.Time.Format
     12 +import Data.Time.LocalTime qualified as Time
     13 +import Hasura.Prelude
     14 +import System.IO.Unsafe
     15 + 
      16 +-- | A fast timestamp source, updated every second (subject to RTS scheduling),
      17 +-- by calling 'Time.getCurrentTimeZone' and 'Time.getCurrentTime'
     18 +--
     19 +-- We also store an equivalent RFC7231 timestamp for use in the @Date@ HTTP
     20 +-- header, avoiding 6% latency regression from computing it every time.
     21 +-- We use this at call sites to try to avoid warp's code path that uses the
     22 +-- auto-update library to do this same thing.
     23 +--
     24 +-- Formerly we used the auto-update library but observed bugs. See
     25 +-- "Hasura.Logging" and #10662
     26 +--
      27 +-- NOTE: to make this more resilient to the updater thread being descheduled
      28 +-- for long periods, we could also store a monotonic timestamp here (fast);
      29 +-- logging threads could then take their own monotonic reading and detect a
      30 +-- stale time. I considered the same trick for more granular timestamps, but
      31 +-- addUTCTime seems to make this just as slow as getCurrentTime
     32 +cachedRecentFormattedTimeAndZone :: IORef (Time.UTCTime, Time.TimeZone, B8.ByteString)
     33 +{-# NOINLINE cachedRecentFormattedTimeAndZone #-}
     34 +cachedRecentFormattedTimeAndZone = unsafePerformIO do
     35 + tRef <- getTimeAndZone >>= newIORef
     36 + void $ forkIO $ uninterruptibleMask_ $ forever do
     37 + threadDelay $ 1000 * 1000
     38 + getTimeAndZone >>= writeIORef tRef
     39 + pure tRef
     40 + where
     41 + getTimeAndZone = do
     42 + !tz <- Time.getCurrentTimeZone
     43 + !t <- Time.getCurrentTime
     44 + let !tRFC7231 = B8.pack $ formatTime defaultTimeLocale "%a, %d %b %Y %H:%M:%S GMT" t
     45 + pure (t, tz, tRFC7231)
     46 + 
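    A minimal reader-side sketch (this is essentially what the `getCachedFormattedTime` helper added to `Hasura.Logging` below does):
    ```haskell
    {-# LANGUAGE ImportQualifiedPost #-}

    -- Sketch only: consuming the cached time is a single cheap IORef read,
    -- with no syscall on the request path.
    import Data.ByteString.Char8 qualified as B8
    import Data.IORef (readIORef)
    import Hasura.CachedTime (cachedRecentFormattedTimeAndZone)

    -- e.g. a pre-rendered RFC7231 value suitable for an HTTP 'Date' header
    recentDateHeader :: IO B8.ByteString
    recentDateHeader = do
      (_utcTime, _timeZone, rfc7231) <- readIORef cachedRecentFormattedTimeAndZone
      pure rfc7231
    ```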
  • ■ ■ ■ ■
    server/src-lib/Hasura/GC.hs
    skipped 74 lines
    75 75   else do
    76 76   when (areOverdue && not areIdle)
    77 77   $ logger
    78  - $ UnstructuredLog LevelWarn
     78 + $ UnstructuredLog LevelInfo
    79 79   $ "Overdue for a major GC: forcing one even though we don't appear to be idle"
    80 80   performMajorGC
    81 81   startTimer >>= go (gcs + 1) (major_gcs + 1) True
    skipped 6 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/GraphQL/Execute/Action.hs
    skipped 362 lines
    363 363   \response -> makeActionResponseNoRelations annFields outputType HashMap.empty False <$> decodeValue response
    364 364   IR.AsyncId -> pure $ AO.String $ actionIdToText actionId
    365 365   IR.AsyncCreatedAt -> pure $ AO.toOrdered $ J.toJSON _alrCreatedAt
    366  - IR.AsyncErrors -> pure $ AO.toOrdered $ J.toJSON $ mkQErrFromErrorValue _alrErrors
     366 + IR.AsyncErrors -> pure $ AO.toOrdered $ J.toJSON $ mkQErrFromErrorValue <$> _alrErrors
    367 367   pure $ encJFromOrderedValue $ AO.object resolvedFields
    368 368   IR.ASISource sourceName sourceConfig ->
    369 369   let jsonAggSelect = mkJsonAggSelect outputType
    skipped 43 lines
    413 413   tablePermissions = RS.TablePerm annBoolExpTrue Nothing
    414 414   in RS.AnnSelectG annotatedFields tableFromExp tablePermissions tableArguments stringifyNumerics Nothing
    415 415   where
    416  - mkQErrFromErrorValue :: Maybe J.Value -> QErr
     416 + mkQErrFromErrorValue :: J.Value -> QErr
    417 417   mkQErrFromErrorValue actionLogResponseError =
    418  - let internal = ExtraInternal <$> (actionLogResponseError >>= (^? key "internal"))
     418 + let internal = ExtraInternal <$> (actionLogResponseError ^? key "internal")
    419 419   internal' = if shouldIncludeInternal (_uiRole userInfo) responseErrorsConfig then internal else Nothing
    420  - errorMessageText = fromMaybe "internal: error in parsing the action log" $ actionLogResponseError >>= (^? key "error" . _String)
    421  - codeMaybe = actionLogResponseError >>= (^? key "code" . _String)
     420 + errorMessageText = fromMaybe "internal: error in parsing the action log" $ actionLogResponseError ^? key "error" . _String
     421 + codeMaybe = actionLogResponseError ^? key "code" . _String
    422 422   code = maybe Unexpected ActionWebhookCode codeMaybe
    423 423   in QErr [] HTTP.status500 errorMessageText code internal'
    424 424   IR.AnnActionAsyncQuery _ actionId outputType asyncFields definitionList stringifyNumerics _ actionSource = annAction
    skipped 476 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/GraphQL/Execute/Subscription/Poll/LiveQuery.hs
    skipped 38 lines
    39 39  import Hasura.RQL.Types.Roles (RoleName)
    40 40  import Hasura.RQL.Types.Subscription (SubscriptionType (..))
    41 41  import Hasura.Server.Logging (ModelInfo (..), ModelInfoLog (..))
    42  -import Hasura.Server.Prometheus (PrometheusMetrics (..), SubscriptionMetrics (..), liveQuerySubscriptionLabel, recordSubcriptionMetric)
     42 +import Hasura.Server.Prometheus (PrometheusMetrics (..), SubscriptionMetrics (..), liveQuerySubscriptionLabel, recordSubscriptionMetric)
    43 43  import Hasura.Server.Types (GranularPrometheusMetricsState (..), ModelInfoLogState (..))
    44 44  import Refined (unrefine)
    45 45  import System.Metrics.Prometheus.Gauge qualified as Prometheus.Gauge
    skipped 75 lines
    121 121   (queryExecutionTime, mxRes) <- runDBSubscription @b sourceConfig query (over (each . _2) C._csVariables cohorts) resolvedConnectionTemplate
    122 122   
    123 123   let dbExecTimeMetric = submDBExecTotalTime $ pmSubscriptionMetrics $ prometheusMetrics
    124  - recordSubcriptionMetric
     124 + recordSubscriptionMetric
    125 125   granularPrometheusMetricsState
    126 126   True
    127 127   operationNamesMap
    skipped 87 lines
    215 215   when (modelInfoLogStatus' == ModelInfoLogOn) $ do
    216 216   for_ (modelInfoList) $ \(ModelInfoPart modelName modelType modelSourceName modelSourceType modelQueryType) -> do
    217 217   L.unLogger logger $ ModelInfoLog L.LevelInfo $ ModelInfo modelName (toTxt modelType) (toTxt <$> modelSourceName) (toTxt <$> modelSourceType) (toTxt modelQueryType) False
    218  - recordSubcriptionMetric
     218 + recordSubscriptionMetric
    219 219   granularPrometheusMetricsState
    220 220   True
    221 221   operationNamesMap
    skipped 33 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/GraphQL/Execute/Subscription/Poll/StreamingQuery.hs
    skipped 40 lines
    41 41  import Hasura.RQL.Types.Subscription (SubscriptionType (..))
    42 42  import Hasura.SQL.Value (TxtEncodedVal (..))
    43 43  import Hasura.Server.Logging (ModelInfo (..), ModelInfoLog (..))
    44  -import Hasura.Server.Prometheus (PrometheusMetrics (..), SubscriptionMetrics (..), recordSubcriptionMetric, streamingSubscriptionLabel)
     44 +import Hasura.Server.Prometheus (PrometheusMetrics (..), SubscriptionMetrics (..), recordSubscriptionMetric, streamingSubscriptionLabel)
    45 45  import Hasura.Server.Types (GranularPrometheusMetricsState (..), ModelInfoLogState (..))
    46 46  import Language.GraphQL.Draft.Syntax qualified as G
    47 47  import Refined (unrefine)
    skipped 241 lines
    289 289   (over (each . _2) C._csVariables $ fmap (fmap fst) cohorts)
    290 290   resolvedConnectionTemplate
    291 291   let dbExecTimeMetric = submDBExecTotalTime $ pmSubscriptionMetrics $ prometheusMetrics
    292  - recordSubcriptionMetric
     292 + recordSubscriptionMetric
    293 293   granularPrometheusMetricsState
    294 294   True
    295 295   operationNames
    skipped 174 lines
    470 470   unLogger logger $ ModelInfoLog LevelInfo $ ModelInfo modelName (toTxt modelType) (toTxt <$> modelSourceName) (toTxt <$> modelSourceType) (toTxt modelQueryType) False
    471 471   postPollHook pollDetails
    472 472   let totalTimeMetric = submTotalTime $ pmSubscriptionMetrics $ prometheusMetrics
    473  - recordSubcriptionMetric
     473 + recordSubscriptionMetric
    474 474   granularPrometheusMetricsState
    475 475   True
    476 476   operationNames
    skipped 56 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/GraphQL/Execute/Subscription/State.hs
    skipped 55 lines
    56 56  import Hasura.SQL.AnyBackend qualified as AB
    57 57  import Hasura.Server.Metrics (ServerMetrics (..))
    58 58  import Hasura.Server.Prometheus
    59  - ( DynamicSubscriptionLabel (..),
     59 + ( DynamicGraphqlOperationLabel (..),
    60 60   PrometheusMetrics (..),
    61 61   SubscriptionLabel (..),
    62 62   SubscriptionMetrics (..),
    skipped 195 lines
    258 258   liftIO $ Prometheus.Gauge.inc $ submActiveLiveQueryPollers $ pmSubscriptionMetrics $ prometheusMetrics
    259 259   
    260 260   liftIO $ EKG.Gauge.inc $ smActiveSubscriptions serverMetrics
    261  - let promMetricGranularLabel = SubscriptionLabel liveQuerySubscriptionLabel (Just $ DynamicSubscriptionLabel (Just parameterizedQueryHash) operationName)
     261 + let promMetricGranularLabel = SubscriptionLabel liveQuerySubscriptionLabel (Just $ DynamicGraphqlOperationLabel (Just parameterizedQueryHash) operationName)
    262 262   promMetricLabel = SubscriptionLabel liveQuerySubscriptionLabel Nothing
    263 263   let numSubscriptionMetric = submActiveSubscriptions $ pmSubscriptionMetrics $ prometheusMetrics
    264 264   recordMetricWithLabel
    skipped 125 lines
    390 390   EKG.Gauge.inc $ smActiveSubscriptions serverMetrics
    391 391   EKG.Gauge.inc $ smActiveStreamingSubscriptions serverMetrics
    392 392   
    393  - let promMetricGranularLabel = SubscriptionLabel streamingSubscriptionLabel (Just $ DynamicSubscriptionLabel (Just parameterizedQueryHash) operationName)
     393 + let promMetricGranularLabel = SubscriptionLabel streamingSubscriptionLabel (Just $ DynamicGraphqlOperationLabel (Just parameterizedQueryHash) operationName)
    394 394   promMetricLabel = SubscriptionLabel streamingSubscriptionLabel Nothing
    395 395   numSubscriptionMetric = submActiveSubscriptions $ pmSubscriptionMetrics $ prometheusMetrics
    396 396   recordMetricWithLabel
    skipped 73 lines
    470 470   <*> TMap.null newOps
    471 471   when cohortIsEmpty $ TMap.delete cohortId cohortMap
    472 472   handlerIsEmpty <- TMap.null cohortMap
    473  - let promMetricGranularLabel = SubscriptionLabel liveQuerySubscriptionLabel (Just $ DynamicSubscriptionLabel (Just parameterizedQueryHash) maybeOperationName)
     473 + let promMetricGranularLabel = SubscriptionLabel liveQuerySubscriptionLabel (Just $ DynamicGraphqlOperationLabel (Just parameterizedQueryHash) maybeOperationName)
    474 474   promMetricLabel = SubscriptionLabel liveQuerySubscriptionLabel Nothing
    475 475   -- when there is no need for handler i.e, this happens to be the last
    476 476   -- operation, take the ref for the polling thread to cancel it
    skipped 92 lines
    569 569   <*> TMap.null newOps
    570 570   when cohortIsEmpty $ TMap.delete currentCohortId cohortMap
    571 571   handlerIsEmpty <- TMap.null cohortMap
    572  - let promMetricGranularLabel = SubscriptionLabel streamingSubscriptionLabel (Just $ DynamicSubscriptionLabel (Just parameterizedQueryHash) maybeOperationName)
     572 + let promMetricGranularLabel = SubscriptionLabel streamingSubscriptionLabel (Just $ DynamicGraphqlOperationLabel (Just parameterizedQueryHash) maybeOperationName)
    573 573   promMetricLabel = SubscriptionLabel streamingSubscriptionLabel Nothing
     574 574   -- when there is no need for handler, i.e., this happens to be the last
     575 575   -- operation, take the ref for the polling thread to cancel it
    skipped 95 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/GraphQL/Transport/HTTP.hs
    skipped 87 lines
    88 88  import Hasura.Server.Logging
    89 89  import Hasura.Server.Logging qualified as L
    90 90  import Hasura.Server.Prometheus
    91  - ( GraphQLRequestMetrics (..),
     91 + ( GranularPrometheusMetricsState,
     92 + GraphQLRequestMetrics (..),
    92 93   PrometheusMetrics (..),
     94 + ResponseStatus (..),
     95 + recordGraphqlOperationMetric,
    93 96   )
    94 97  import Hasura.Server.Telemetry.Counters qualified as Telem
    95 98  import Hasura.Server.Types (HeaderPrecedence, ModelInfoLogState (..), MonadGetPolicies (..), ReadOnlyMode (..), RemoteSchemaResponsePriority (..), RequestId (..))
    skipped 4 lines
    100 103  import Language.GraphQL.Draft.Syntax qualified as G
    101 104  import Network.HTTP.Types qualified as HTTP
    102 105  import Network.Wai.Extended qualified as Wai
    103  -import System.Metrics.Prometheus.Counter qualified as Prometheus.Counter
     106 +import System.Metrics.Prometheus.CounterVector qualified as Prometheus.CounterVector
    104 107  import System.Metrics.Prometheus.Histogram qualified as Prometheus.Histogram
    105 108   
    106 109  -- | Encapsulates a function that stores a query response in the cache.
    skipped 222 lines
    329 332   ResponseInternalErrorsConfig ->
    330 333   m (GQLQueryOperationSuccessLog, HttpResponse (Maybe GQResponse, EncJSON))
    331 334  runGQ env sqlGenCtx sc enableAL readOnlyMode remoteSchemaResponsePriority headerPrecedence prometheusMetrics logger agentLicenseKey reqId userInfo ipAddress reqHeaders queryType reqUnparsed responseErrorsConfig = do
     335 + granularPrometheusMetricsState <- runGetPrometheusMetricsGranularity
    332 336   getModelInfoLogStatus' <- runGetModelInfoLogStatus
    333 337   modelInfoLogStatus <- liftIO getModelInfoLogStatus'
    334 338   let gqlMetrics = pmGraphQLRequestMetrics prometheusMetrics
    335 339   
    336  - (totalTime, (response, parameterizedQueryHash, gqlOpType, modelInfoListForLogging, queryCachedStatus)) <- withElapsedTime $ do
    337  - (reqParsed, runLimits, queryParts) <- Tracing.newSpan "Parse GraphQL" $ observeGQLQueryError gqlMetrics Nothing $ do
     340 + (totalTime, (response, parameterizedQueryHash, gqlOpType, gqlOperationName, modelInfoListForLogging, queryCachedStatus)) <- withElapsedTime $ do
     341 + (reqParsed, runLimits, queryParts) <- Tracing.newSpan "Parse GraphQL" $ observeGQLQueryError granularPrometheusMetricsState gqlMetrics Nothing (_grOperationName reqUnparsed) Nothing $ do
    338 342   -- 1. Run system authorization on the 'reqUnparsed :: GQLReqUnparsed' query.
    339 343   reqParsed <-
    340 344   E.checkGQLExecution userInfo (reqHeaders, ipAddress) enableAL sc reqUnparsed reqId
    skipped 7 lines
    348 352   return (reqParsed, runLimits, queryParts)
    349 353   
    350 354   let gqlOpType = G._todType queryParts
    351  - observeGQLQueryError gqlMetrics (Just gqlOpType) $ do
     355 + let gqlOperationName = getOpNameFromParsedReq reqParsed
     356 + observeGQLQueryError granularPrometheusMetricsState gqlMetrics (Just gqlOpType) gqlOperationName Nothing $ do
    352 357   -- 3. Construct the remainder of the execution plan.
    353 358   let maybeOperationName = _unOperationName <$> getOpNameFromParsedReq reqParsed
    354 359   for_ maybeOperationName $ \nm ->
    skipped 19 lines
    374 379   
    375 380   -- 4. Execute the execution plan producing a 'AnnotatedResponse'.
    376 381   (response, queryCachedStatus, modelInfoFromExecution) <- executePlan reqParsed runLimits execPlan
    377  - return (response, parameterizedQueryHash, gqlOpType, ((modelInfoList <> (modelInfoFromExecution))), queryCachedStatus)
     382 + return (response, parameterizedQueryHash, gqlOpType, gqlOperationName, ((modelInfoList <> (modelInfoFromExecution))), queryCachedStatus)
    378 383   
    379 384   -- 5. Record telemetry
    380 385   recordTimings totalTime response
    381 386   
    382 387   -- 6. Record Prometheus metrics (query successes)
    383  - liftIO $ recordGQLQuerySuccess gqlMetrics totalTime gqlOpType
     388 + liftIO $ recordGQLQuerySuccess granularPrometheusMetricsState gqlMetrics totalTime gqlOperationName parameterizedQueryHash gqlOpType
    384 389   
    385 390   -- 7. Return the response along with logging metadata.
    386 391   let requestSize = LBS.length $ J.encode reqUnparsed
    skipped 216 lines
    603 608   ( MonadIO n,
    604 609   MonadError e n
    605 610   ) =>
     611 + IO GranularPrometheusMetricsState ->
    606 612   GraphQLRequestMetrics ->
    607 613   Maybe G.OperationType ->
     614 + Maybe OperationName ->
     615 + Maybe ParameterizedQueryHash ->
    608 616   n a ->
    609 617   n a
    610  - observeGQLQueryError gqlMetrics mOpType action =
     618 + observeGQLQueryError granularPrometheusMetricsState gqlMetrics mOpType mOpName mQHash action =
    611 619   catchError (fmap Right action) (pure . Left) >>= \case
    612 620   Right result ->
    613 621   pure result
    614 622   Left err -> do
    615  - case mOpType of
    616  - Nothing ->
    617  - liftIO $ Prometheus.Counter.inc (gqlRequestsUnknownFailure gqlMetrics)
    618  - Just opType -> case opType of
    619  - G.OperationTypeQuery ->
    620  - liftIO $ Prometheus.Counter.inc (gqlRequestsQueryFailure gqlMetrics)
    621  - G.OperationTypeMutation ->
    622  - liftIO $ Prometheus.Counter.inc (gqlRequestsMutationFailure gqlMetrics)
    623  - G.OperationTypeSubscription ->
    624  - -- We do not collect metrics for subscriptions at the request level.
    625  - pure ()
     623 + recordGraphqlOperationMetric
     624 + granularPrometheusMetricsState
     625 + mOpType
     626 + Failed
     627 + mOpName
     628 + mQHash
     629 + (Prometheus.CounterVector.inc $ gqlRequests gqlMetrics)
    626 630   throwError err
    627 631   
    628 632   -- Tally and record execution times for successful GraphQL requests.
    629 633   recordGQLQuerySuccess ::
    630  - GraphQLRequestMetrics -> DiffTime -> G.OperationType -> IO ()
    631  - recordGQLQuerySuccess gqlMetrics totalTime = \case
    632  - G.OperationTypeQuery -> liftIO $ do
    633  - Prometheus.Counter.inc (gqlRequestsQuerySuccess gqlMetrics)
    634  - Prometheus.Histogram.observe (gqlExecutionTimeSecondsQuery gqlMetrics) (realToFrac totalTime)
    635  - G.OperationTypeMutation -> liftIO $ do
    636  - Prometheus.Counter.inc (gqlRequestsMutationSuccess gqlMetrics)
    637  - Prometheus.Histogram.observe (gqlExecutionTimeSecondsMutation gqlMetrics) (realToFrac totalTime)
    638  - G.OperationTypeSubscription ->
    639  - -- We do not collect metrics for subscriptions at the request level.
    640  - -- Furthermore, we do not serve GraphQL subscriptions over HTTP.
    641  - pure ()
     634 + IO GranularPrometheusMetricsState -> GraphQLRequestMetrics -> DiffTime -> Maybe OperationName -> ParameterizedQueryHash -> G.OperationType -> IO ()
     635 + recordGQLQuerySuccess granularPrometheusMetricsState gqlMetrics totalTime opName qHash opType = do
     636 + recordGraphqlOperationMetric
     637 + granularPrometheusMetricsState
     638 + (Just opType)
     639 + Success
     640 + opName
     641 + (Just qHash)
     642 + (Prometheus.CounterVector.inc $ gqlRequests gqlMetrics)
     643 + case opType of
     644 + G.OperationTypeQuery -> liftIO $ Prometheus.Histogram.observe (gqlExecutionTimeSecondsQuery gqlMetrics) (realToFrac totalTime)
     645 + G.OperationTypeMutation -> liftIO $ Prometheus.Histogram.observe (gqlExecutionTimeSecondsMutation gqlMetrics) (realToFrac totalTime)
     646 + G.OperationTypeSubscription ->
     647 + -- We do not collect metrics for subscriptions at the request level.
     648 + -- Furthermore, we do not serve GraphQL subscriptions over HTTP.
     649 + pure ()
    642 650   
    643 651  coalescePostgresMutations ::
    644 652   EB.ExecutionPlan ->
    skipped 198 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/GraphQL/Transport/WebSocket/Server.hs
    skipped 71 lines
    72 72  import Hasura.Server.Cors (CorsPolicy)
    73 73  import Hasura.Server.Init.Config (AllowListStatus (..), WSConnectionInitTimeout (..))
    74 74  import Hasura.Server.Prometheus
    75  - ( DynamicSubscriptionLabel (..),
     75 + ( DynamicGraphqlOperationLabel (..),
    76 76   PrometheusMetrics (..),
    77 77   recordMetricWithLabel,
    78 78   )
    skipped 560 lines
    639 639   messageDetails = MessageDetails (SB.fromLBS msg) messageLength
    640 640   parameterizedQueryHash = wsInfo >>= _wseiParameterizedQueryHash
    641 641   operationName = wsInfo >>= _wseiOperationName
    642  - promMetricGranularLabel = DynamicSubscriptionLabel parameterizedQueryHash operationName
    643  - promMetricLabel = DynamicSubscriptionLabel Nothing Nothing
     642 + promMetricGranularLabel = DynamicGraphqlOperationLabel parameterizedQueryHash operationName
     643 + promMetricLabel = DynamicGraphqlOperationLabel Nothing Nothing
    644 644   websocketBytesSentMetric = pmWebSocketBytesSent prometheusMetrics
    645 645   granularPrometheusMetricsState <- runGetPrometheusMetricsGranularity
    646 646   liftIO $ do
    skipped 56 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/GraphQL/Transport/WebSocket.hs
    skipped 100 lines
    101 101  import Hasura.Server.Prometheus
    102 102   ( GraphQLRequestMetrics (..),
    103 103   PrometheusMetrics (..),
     104 + ResponseStatus (..),
     105 + recordGraphqlOperationMetric,
    104 106   )
    105 107  import Hasura.Server.Telemetry.Counters qualified as Telem
    106 108  import Hasura.Server.Types (GranularPrometheusMetricsState (..), HeaderPrecedence, ModelInfoLogState (..), MonadGetPolicies (..), RemoteSchemaResponsePriority, RequestId, getRequestId)
    skipped 8 lines
    115 117  import Network.WebSockets qualified as WS
    116 118  import Refined (unrefine)
    117 119  import StmContainers.Map qualified as STMMap
    118  -import System.Metrics.Prometheus.Counter qualified as Prometheus.Counter
     120 +import System.Metrics.Prometheus.CounterVector qualified as Prometheus.CounterVector
    119 121  import System.Metrics.Prometheus.Histogram qualified as Prometheus.Histogram
    120 122   
    121 123  -- | 'ES.SubscriberDetails' comes from 'Hasura.GraphQL.Execute.LiveQuery.State.addLiveQuery'. We use
    skipped 329 lines
    451 453  onStart enabledLogTypes agentLicenseKey serverEnv wsConn shouldCaptureVariables (StartMsg opId q) onMessageActions responseErrorsConfig headerPrecedence = catchAndIgnore $ do
    452 454   modelInfoLogStatus' <- runGetModelInfoLogStatus
    453 455   modelInfoLogStatus <- liftIO modelInfoLogStatus'
     456 + granularPrometheusMetricsState <- runGetPrometheusMetricsGranularity
    454 457   timerTot <- startTimer
    455 458   op <- liftIO $ STM.atomically $ STMMap.lookup opId opMap
    456 459   
    skipped 1 lines
    458 461   -- we process all operations on a websocket connection serially:
    459 462   when (isJust op)
    460 463   $ withComplete
    461  - $ sendStartErr
     464 + $ sendStartErr granularPrometheusMetricsState (snd =<< op)
    462 465   $ "an operation already exists with this id: "
    463 466   <> unOperationId opId
    464 467   
    skipped 2 lines
    467 470   CSInitialised WsClientState {..} -> return (wscsUserInfo, wscsReqHeaders, wscsIpAddress)
    468 471   CSInitError initErr -> do
    469 472   let e = "cannot start as connection_init failed with: " <> initErr
    470  - withComplete $ sendStartErr e
     473 + withComplete $ sendStartErr granularPrometheusMetricsState (_grOperationName q) e
    471 474   CSNotInitialised _ _ -> do
    472 475   let e = "start received before the connection is initialised"
    473  - withComplete $ sendStartErr e
     476 + withComplete $ sendStartErr granularPrometheusMetricsState (_grOperationName q) e
    474 477   
    475 478   (requestId, reqHdrs) <- liftIO $ getRequestId origReqHdrs
    476 479   sc <- liftIO $ getSchemaCacheWithVersion appStateRef
    skipped 11 lines
    488 491   
    489 492   (reqParsed, queryParts) <- Tracing.newSpan "Parse GraphQL" $ do
    490 493   reqParsedE <- lift $ E.checkGQLExecution userInfo (reqHdrs, ipAddress) enableAL sc q requestId
    491  - reqParsed <- onLeft reqParsedE (withComplete . preExecErr requestId Nothing)
     494 + reqParsed <- onLeft reqParsedE (withComplete . preExecErr granularPrometheusMetricsState requestId Nothing (_grOperationName q) Nothing)
    492 495   queryPartsE <- runExceptT $ getSingleOperation reqParsed
    493  - queryParts <- onLeft queryPartsE (withComplete . preExecErr requestId Nothing)
     496 + queryParts <- onLeft queryPartsE (withComplete . preExecErr granularPrometheusMetricsState requestId Nothing (getOpNameFromParsedReq reqParsed) Nothing)
    494 497   pure (reqParsed, queryParts)
    495 498   
    496 499   let gqlOpType = G._todType queryParts
    skipped 22 lines
    519 522   responseErrorsConfig
    520 523   headerPrecedence
    521 524   
    522  - (parameterizedQueryHash, execPlan, modelInfoList) <- onLeft execPlanE (withComplete . preExecErr requestId (Just gqlOpType))
     525 + (parameterizedQueryHash, execPlan, modelInfoList) <- onLeft execPlanE (withComplete . preExecErr granularPrometheusMetricsState requestId (Just gqlOpType) opName Nothing)
    523 526   
    524 527   case execPlan of
    525 528   E.QueryExecutionPlan queryPlan asts dirMap -> do
    skipped 9 lines
    535 538   ResponseCached cachedResponseData -> do
    536 539   logQueryLog logger $ QueryLog q Nothing requestId QueryLogKindCached
    537 540   let reportedExecutionTime = 0
    538  - liftIO $ recordGQLQuerySuccess reportedExecutionTime gqlOpType
     541 + liftIO $ recordGQLQuerySuccess granularPrometheusMetricsState reportedExecutionTime opName parameterizedQueryHash gqlOpType
    539 542   modelInfoLogging modelInfoList True modelInfoLogStatus
    540 543   sendSuccResp cachedResponseData opName parameterizedQueryHash $ ES.SubscriptionMetadata reportedExecutionTime
    541 544   ResponseUncached storeResponseM -> do
    skipped 40 lines
    582 585   let (allResponses', allModelInfo) = unzip allResponses
    583 586   pure $ (AnnotatedResponsePart 0 Telem.Local (encJFromList (map arpResponse allResponses')) [], concat allModelInfo)
    584 587   in getResponse
    585  - sendResultFromFragments Telem.Query timerTot requestId conclusion opName parameterizedQueryHash gqlOpType modelInfoList modelInfoLogStatus
     588 + sendResultFromFragments granularPrometheusMetricsState Telem.Query timerTot requestId conclusion opName parameterizedQueryHash gqlOpType modelInfoList modelInfoLogStatus
    586 589   case (storeResponseM, conclusion) of
    587 590   (Just ResponseCacher {..}, Right results) -> do
    588 591   let (key, (compositeValue')) = unzip $ InsOrdHashMap.toList results
    skipped 19 lines
    608 611   $ doQErr
    609 612   $ runPGMutationTransaction requestId q userInfo logger sourceConfig resolvedConnectionTemplate pgMutations
    610 613   -- we do not construct result fragments since we have only one result
    611  - handleResult requestId gqlOpType resp \(telemTimeIO_DT, results) -> do
     614 + handleResult granularPrometheusMetricsState requestId gqlOpType opName parameterizedQueryHash resp \(telemTimeIO_DT, results) -> do
    612 615   let telemQueryType = Telem.Query
    613 616   telemLocality = Telem.Local
    614 617   telemTimeIO = convertDuration telemTimeIO_DT
    skipped 3 lines
    618 621   $ ES.SubscriptionMetadata telemTimeIO_DT
    619 622   -- Telemetry. NOTE: don't time network IO:
    620 623   Telem.recordTimingMetric Telem.RequestDimensions {..} Telem.RequestTimings {..}
    621  - liftIO $ recordGQLQuerySuccess totalTime gqlOpType
     624 + liftIO $ recordGQLQuerySuccess granularPrometheusMetricsState totalTime opName parameterizedQueryHash gqlOpType
    622 625   
    623 626   -- we are not in the transaction case; proceeding normally
    624 627   Nothing -> do
    skipped 41 lines
    666 669   let (allResponses', allModelInfo) = unzip allResponses
    667 670   pure $ (AnnotatedResponsePart 0 Telem.Local (encJFromList (map arpResponse allResponses')) [], concat allModelInfo)
    668 671   in getResponse
    669  - sendResultFromFragments Telem.Query timerTot requestId conclusion opName parameterizedQueryHash gqlOpType modelInfoList modelInfoLogStatus
     672 + sendResultFromFragments granularPrometheusMetricsState Telem.Query timerTot requestId conclusion opName parameterizedQueryHash gqlOpType modelInfoList modelInfoLogStatus
    670 673   liftIO $ sendCompleted (Just requestId) (Just parameterizedQueryHash)
    671 674   E.SubscriptionExecutionPlan (subExec, modifier) -> do
    672 675   case subExec of
    skipped 45 lines
    718 721   asyncActionQueryLive
    719 722   E.SEOnSourceDB (E.SSLivequery actionIds liveQueryBuilder) -> do
    720 723   actionLogMapE <- fmap fst <$> runExceptT (EA.fetchActionLogResponses actionIds)
    721  - actionLogMap <- onLeft actionLogMapE (withComplete . preExecErr requestId (Just gqlOpType))
    722  - granularPrometheusMetricsState <- runGetPrometheusMetricsGranularity
     724 + actionLogMap <- onLeft actionLogMapE (withComplete . preExecErr granularPrometheusMetricsState requestId (Just gqlOpType) opName (Just parameterizedQueryHash))
    723 725   modelInfoLogStatus'' <- runGetModelInfoLogStatus
    724 726   opMetadataE <- liftIO $ startLiveQuery opName liveQueryBuilder parameterizedQueryHash requestId actionLogMap granularPrometheusMetricsState modifier modelInfoLogStatus''
    725  - lqId <- onLeft opMetadataE (withComplete . preExecErr requestId (Just gqlOpType))
     727 + lqId <- onLeft opMetadataE (withComplete . preExecErr granularPrometheusMetricsState requestId (Just gqlOpType) opName (Just parameterizedQueryHash))
    726 728   -- Update async action query subscription state
    727 729   case NE.nonEmpty (toList actionIds) of
    728 730   Nothing -> do
    skipped 18 lines
    747 749   onUnexpectedException
    748 750   asyncActionQueryLive
    749 751   E.SEOnSourceDB (E.SSStreaming rootFieldName streamQueryBuilder) -> do
    750  - granularPrometheusMetricsState <- runGetPrometheusMetricsGranularity
    751 752   modelInfoLogStatus'' <- runGetModelInfoLogStatus
    752 753   liftIO $ startStreamingQuery rootFieldName streamQueryBuilder parameterizedQueryHash requestId granularPrometheusMetricsState modifier modelInfoLogStatus''
    753 754   
    754  - liftIO $ Prometheus.Counter.inc (gqlRequestsSubscriptionSuccess gqlMetrics)
     755 + recordGraphqlOperationMetric
     756 + granularPrometheusMetricsState
     757 + (Just G.OperationTypeSubscription)
     758 + Success
     759 + opName
     760 + (Just parameterizedQueryHash)
     761 + (Prometheus.CounterVector.inc $ gqlRequests gqlMetrics)
    755 762   liftIO $ logOpEv ODStarted (Just requestId) (Just parameterizedQueryHash)
    756 763   where
    757 764   sendDataMsg = WS._wsaGetDataMessageType onMessageActions
    skipped 29 lines
    787 794   
    788 795   handleResult ::
    789 796   forall a.
     797 + IO GranularPrometheusMetricsState ->
    790 798   RequestId ->
    791 799   G.OperationType ->
     800 + Maybe OperationName ->
     801 + ParameterizedQueryHash ->
    792 802   Either (Either GQExecError QErr) a ->
    793 803   (a -> ExceptT () m ()) ->
    794 804   ExceptT () m ()
    795  - handleResult requestId gqlOpType r f = case r of
    796  - Left (Left err) -> postExecErr' gqlOpType err
    797  - Left (Right err) -> postExecErr requestId gqlOpType err
     805 + handleResult granularPrometheusMetricsState requestId gqlOpType mOpName pqh r f = case r of
     806 + Left (Left err) -> postExecErr' granularPrometheusMetricsState gqlOpType mOpName pqh err
     807 + Left (Right err) -> postExecErr granularPrometheusMetricsState requestId gqlOpType mOpName pqh err
    798 808   Right results -> f results
    799 809   
    800  - sendResultFromFragments telemQueryType timerTot requestId r opName pqh gqlOpType modelInfoList getModelInfoLogStatus =
    801  - handleResult requestId gqlOpType r \results -> do
     810 + sendResultFromFragments granularPrometheusMetricsState telemQueryType timerTot requestId r opName pqh gqlOpType modelInfoList getModelInfoLogStatus =
     811 + handleResult granularPrometheusMetricsState requestId gqlOpType opName pqh r \results -> do
    802 812   let (key, (compositeValue')) = unzip $ InsOrdHashMap.toList results
    803 813   (annotatedResp, model) = unzip compositeValue'
    804 814   results' = InsOrdHashMap.fromList $ zip key annotatedResp
    skipped 9 lines
    814 824   -- Telemetry. NOTE: don't time network IO:
    815 825   Telem.recordTimingMetric Telem.RequestDimensions {..} Telem.RequestTimings {..}
    816 826   modelInfoLogging (modelInfoList <> modelInfoList') False getModelInfoLogStatus
    817  - liftIO $ (recordGQLQuerySuccess totalTime gqlOpType)
     827 + liftIO $ (recordGQLQuerySuccess granularPrometheusMetricsState totalTime opName pqh gqlOpType)
    818 828   
    819 829   runRemoteGQ ::
    820 830   RequestId ->
    skipped 64 lines
    885 895   getErrFn ERTLegacy = encodeQErr
    886 896   getErrFn ERTGraphqlCompliant = encodeGQLErr
    887 897   
    888  - sendStartErr e = do
     898 + sendStartErr granularPrometheusMetricsState mOpName e = do
    889 899   let errFn = getErrFn errRespTy
    890 900   sendMsg wsConn
    891 901   $ SMErr
    skipped 1 lines
    893 903   $ errFn False
    894 904   $ err400 StartFailed e
    895 905   liftIO $ logOpEv (ODProtoErr e) Nothing Nothing
    896  - liftIO $ reportGQLQueryError Nothing
     906 + liftIO $ reportGQLQueryError granularPrometheusMetricsState mOpName Nothing Nothing
    897 907   liftIO $ closeConnAction wsConn opId (T.unpack e)
    898 908   
    899 909   sendCompleted reqId paramQueryHash = do
    skipped 1 lines
    901 911   logOpEv ODCompleted reqId paramQueryHash
    902 912   
    903 913   postExecErr ::
     914 + IO GranularPrometheusMetricsState ->
    904 915   RequestId ->
    905 916   G.OperationType ->
     917 + Maybe OperationName ->
     918 + ParameterizedQueryHash ->
    906 919   QErr ->
    907 920   ExceptT () m ()
    908  - postExecErr reqId gqlOpType qErr = do
     921 + postExecErr granularPrometheusMetricsState reqId gqlOpType mOpName pqh qErr = do
    909 922   let errFn = getErrFn errRespTy False
    910 923   liftIO $ logOpEv (ODQueryErr qErr) (Just reqId) Nothing
    911  - postExecErr' gqlOpType $ GQExecError $ pure $ errFn qErr
     924 + postExecErr' granularPrometheusMetricsState gqlOpType mOpName pqh $ GQExecError $ pure $ errFn qErr
    912 925   
    913  - postExecErr' :: G.OperationType -> GQExecError -> ExceptT () m ()
    914  - postExecErr' gqlOpType qErr =
     926 + postExecErr' :: IO GranularPrometheusMetricsState -> G.OperationType -> Maybe OperationName -> ParameterizedQueryHash -> GQExecError -> ExceptT () m ()
     927 + postExecErr' granularPrometheusMetricsState gqlOpType mOpName pqh qErr =
    915 928   liftIO $ do
    916  - reportGQLQueryError (Just gqlOpType)
     929 + reportGQLQueryError granularPrometheusMetricsState mOpName (Just pqh) (Just gqlOpType)
    917 930   postExecErrAction wsConn opId qErr
    918 931   
    919 932   -- why wouldn't pre exec error use graphql response?
    920  - preExecErr reqId mGqlOpType qErr = do
    921  - liftIO $ reportGQLQueryError mGqlOpType
     933 + preExecErr granularPrometheusMetricsState reqId mGqlOpType mOpName pqh qErr = do
     934 + liftIO $ reportGQLQueryError granularPrometheusMetricsState mOpName pqh mGqlOpType
    922 935   liftIO $ sendError reqId qErr
    923 936   
    924 937   sendError reqId qErr = do
    skipped 124 lines
    1049 1062   catchAndIgnore :: ExceptT () m () -> m ()
    1050 1063   catchAndIgnore m = void $ runExceptT m
    1051 1064   
    1052  - reportGQLQueryError :: Maybe G.OperationType -> IO ()
    1053  - reportGQLQueryError = \case
    1054  - Nothing ->
    1055  - liftIO $ Prometheus.Counter.inc (gqlRequestsUnknownFailure gqlMetrics)
    1056  - Just opType -> case opType of
    1057  - G.OperationTypeQuery ->
    1058  - liftIO $ Prometheus.Counter.inc (gqlRequestsQueryFailure gqlMetrics)
    1059  - G.OperationTypeMutation ->
    1060  - liftIO $ Prometheus.Counter.inc (gqlRequestsMutationFailure gqlMetrics)
    1061  - G.OperationTypeSubscription ->
    1062  - liftIO $ Prometheus.Counter.inc (gqlRequestsSubscriptionFailure gqlMetrics)
     1065 + reportGQLQueryError :: IO GranularPrometheusMetricsState -> Maybe OperationName -> Maybe ParameterizedQueryHash -> Maybe G.OperationType -> IO ()
     1066 + reportGQLQueryError granularPrometheusMetricsState mOpName mQHash mOpType =
     1067 + recordGraphqlOperationMetric
     1068 + granularPrometheusMetricsState
     1069 + mOpType
     1070 + Failed
     1071 + mOpName
     1072 + mQHash
     1073 + (Prometheus.CounterVector.inc $ gqlRequests gqlMetrics)
    1063 1074   
    1064 1075   -- Tally and record execution times for successful GraphQL requests.
    1065  - recordGQLQuerySuccess :: DiffTime -> G.OperationType -> IO ()
    1066  - recordGQLQuerySuccess totalTime = \case
    1067  - G.OperationTypeQuery -> liftIO $ do
    1068  - Prometheus.Counter.inc (gqlRequestsQuerySuccess gqlMetrics)
    1069  - Prometheus.Histogram.observe (gqlExecutionTimeSecondsQuery gqlMetrics) (realToFrac totalTime)
    1070  - G.OperationTypeMutation -> liftIO $ do
    1071  - Prometheus.Counter.inc (gqlRequestsMutationSuccess gqlMetrics)
    1072  - Prometheus.Histogram.observe (gqlExecutionTimeSecondsMutation gqlMetrics) (realToFrac totalTime)
    1073  - G.OperationTypeSubscription ->
    1074  - -- We do not collect metrics for subscriptions at the request level.
    1075  - pure ()
     1076 + recordGQLQuerySuccess :: IO GranularPrometheusMetricsState -> DiffTime -> Maybe OperationName -> ParameterizedQueryHash -> G.OperationType -> IO ()
     1077 + recordGQLQuerySuccess granularPrometheusMetricsState totalTime mOpName qHash opType = do
     1078 + recordGraphqlOperationMetric
     1079 + granularPrometheusMetricsState
     1080 + (Just opType)
     1081 + Success
     1082 + mOpName
     1083 + (Just qHash)
     1084 + (Prometheus.CounterVector.inc $ gqlRequests gqlMetrics)
     1085 + case opType of
     1086 + G.OperationTypeQuery -> liftIO $ Prometheus.Histogram.observe (gqlExecutionTimeSecondsQuery gqlMetrics) (realToFrac totalTime)
     1087 + G.OperationTypeMutation -> liftIO $ Prometheus.Histogram.observe (gqlExecutionTimeSecondsMutation gqlMetrics) (realToFrac totalTime)
     1088 + G.OperationTypeSubscription ->
     1089 + -- We do not collect metrics for subscriptions at the request level.
     1090 + pure ()
    1076 1091   
    1077 1092  onMessage ::
    1078 1093   ( MonadIO m,
    skipped 209 lines
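
The WebSocket transport changes above replace the seven fixed gqlRequests* counters with a single counter vector keyed by labels (operation type, response status, plus optional per-operation dynamic labels). A minimal sketch of that pattern, using a plain Map in an IORef in place of the internal Prometheus CounterVector (names here are illustrative, not the library's API):

```haskell
{-# LANGUAGE ImportQualifiedPost #-}

module CounterVectorSketch where

import Data.IORef
import Data.Map.Strict (Map)
import Data.Map.Strict qualified as Map

-- Labels mirroring GraphQLRequestsLabels: operation type and outcome.
data OpType = Query | Mutation | Subscription | Unknown
  deriving (Eq, Ord, Show)

data Status = Success | Failed
  deriving (Eq, Ord, Show)

-- A toy counter vector: one counter per label combination, created on
-- demand. The real CounterVector is a concurrent metrics structure;
-- this only shows the shape of the API.
newtype CounterVector k = CounterVector (IORef (Map k Int))

newCounterVector :: IO (CounterVector k)
newCounterVector = CounterVector <$> newIORef Map.empty

inc :: (Ord k) => CounterVector k -> k -> IO ()
inc (CounterVector ref) key =
  atomicModifyIORef' ref $ \m -> (Map.insertWith (+) key 1 m, ())

dump :: CounterVector k -> IO (Map k Int)
dump (CounterVector ref) = readIORef ref

-- One vector stands in for gqlRequestsQuerySuccess,
-- gqlRequestsQueryFailure, ..., gqlRequestsUnknownFailure:
demo :: IO ()
demo = do
  gqlRequests <- newCounterVector
  inc gqlRequests (Query, Success)
  inc gqlRequests (Unknown, Failed)
  dump gqlRequests >>= print
```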
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/Logging.hs
    skipped 59 lines
    60 60   )
    61 61  where
    62 62   
    63  -import Control.AutoUpdate qualified as Auto
    64 63  import Control.Exception (ErrorCall (ErrorCallWithLocation), catch)
    65 64  import Control.FoldDebounce qualified as FDebounce
    66 65  import Control.Monad.Trans.Control
    skipped 4 lines
    71 70  import Data.ByteString.Lazy qualified as BL
    72 71  import Data.ByteString.Lazy.Char8 qualified as BLC
    73 72  import Data.HashSet qualified as Set
     73 +import Data.IORef
    74 74  import Data.Map.Strict (Map)
    75 75  import Data.Map.Strict qualified as Map
    76 76  import Data.SerializableBlob qualified as SB
    skipped 4 lines
    81 81  import Data.Time.Format qualified as Format
    82 82  import Data.Time.LocalTime qualified as Time
    83 83  import Hasura.Base.Error (QErr)
     84 +import Hasura.CachedTime
    84 85  import Hasura.Prelude
    85 86  import Hasura.Tracing.Class qualified as Tracing
    86 87  import Hasura.Tracing.Context
    skipped 271 lines
    358 359  -- * LoggerSettings
    359 360   
    360 361  data LoggerSettings = LoggerSettings
    361  - { -- | should current time be cached (refreshed every sec)
     362 + { -- | should current time be cached (refreshed every sec)? For performance
     363 + -- impact, see benchmarks in: https://github.com/hasura/graphql-engine-mono/pull/10631
    362 364   _lsCachedTimestamp :: !Bool,
    363 365   _lsTimeZone :: !(Maybe Time.TimeZone),
    364 366   _lsLevel :: !LogLevel
    skipped 11 lines
    376 378   t <- Time.getCurrentTime
    377 379   return $ FormattedTime t tz
    378 380   
      381 +-- | Get the recent cached time, formatted in the specified timezone (falling back to the cached zone)
     382 +getCachedFormattedTime :: Maybe Time.TimeZone -> IO FormattedTime
     383 +getCachedFormattedTime tzM = do
     384 + (t, tz, _) <- readIORef cachedRecentFormattedTimeAndZone
     385 + pure $ maybe (FormattedTime t tz) (FormattedTime t) tzM
     386 + 
    379 387  -- | Creates a new 'LoggerCtx', optionally fanning out to an OTLP endpoint
    380 388  -- (when enabled) as well.
    381 389  --
    skipped 10 lines
    392 400   LoggerSettings ->
    393 401   Set.HashSet (EngineLogType impl) ->
    394 402   ManagedT io (LoggerCtx impl)
    395  -mkLoggerCtxOTLP logsExporter (LoggerSettings cacheTime tzM logLevel) enabledLogs = do
     403 +mkLoggerCtxOTLP logsExporter (LoggerSettings shouldCacheTime tzM logLevel) enabledLogs = do
    396 404   loggerSet <- allocate acquire release
    397  - timeGetter <- liftIO $ bool (pure $ getFormattedTime tzM) cachedTimeGetter cacheTime
    398  - pure $ LoggerCtx loggerSet logLevel timeGetter enabledLogs logsExporter
     405 + pure $ LoggerCtx loggerSet logLevel (timeGetter tzM) enabledLogs logsExporter
    399 406   where
    400 407   acquire = liftIO do
    401 408   FL.newStdoutLoggerSet FL.defaultBufSize
    402 409   release loggerSet = liftIO do
    403 410   FL.flushLogStr loggerSet
    404 411   FL.rmLoggerSet loggerSet
    405  - cachedTimeGetter =
    406  - Auto.mkAutoUpdate
    407  - Auto.defaultUpdateSettings
    408  - { Auto.updateAction = getFormattedTime tzM
    409  - }
      412 + -- use either a slower per-log-line time lookup, or a cheap read of a
      413 + -- coarse, roughly once-per-second cached timestamp
     414 + timeGetter
     415 + | shouldCacheTime = getCachedFormattedTime
     416 + | otherwise = getFormattedTime
    410 417   
    411 418  -- | 'mkLoggerCtxOTLP' but with no otlp log shipping, for compatibility
    412 419  mkLoggerCtx ::
    skipped 143 lines
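
The Logging.hs change above drops the per-logger Control.AutoUpdate timer in favour of a shared cell assumed to be defined in Hasura.CachedTime (cachedRecentFormattedTimeAndZone). A minimal sketch of the pattern with illustrative names: one background thread refreshes an IORef roughly once per second, and hot paths read it instead of making a clock syscall per log line:

```haskell
module CachedTimeSketch where

import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forever, void)
import Data.IORef (IORef, newIORef, readIORef, writeIORef)
import Data.Time.Clock (UTCTime, getCurrentTime)
import System.IO.Unsafe (unsafePerformIO)

-- One process-wide cell holding a recent timestamp (the real module
-- also appears to cache the timezone and a pre-rendered RFC 7231
-- string, given the triple read elsewhere in this commit).
{-# NOINLINE cachedRecentTime #-}
cachedRecentTime :: IORef UTCTime
cachedRecentTime = unsafePerformIO (getCurrentTime >>= newIORef)

-- Start once at process startup; readers never block.
startTimeCacheRefresher :: IO ()
startTimeCacheRefresher =
  void $ forkIO $ forever $ do
    getCurrentTime >>= writeIORef cachedRecentTime
    threadDelay 1000000 -- refresh roughly every second

-- What a hot path does instead of getCurrentTime:
readRecentTime :: IO UTCTime
readRecentTime = readIORef cachedRecentTime
```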
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/RQL/DDL/Schema/Cache.hs
    skipped 1592 lines
    1593 1593   then do
    1594 1594   recreateTriggerIfNeeded
    1595 1595   -<
    1596  - ( dynamicConfig,
     1596 + ( (_cdcSQLGenCtx dynamicConfig),
    1597 1597   table,
    1598 1598   tableColumns,
    1599 1599   triggerName,
    skipped 29 lines
    1629 1629   -- computation will not be done again.
    1630 1630   Inc.cache
    1631 1631   proc
    1632  - ( dynamicConfig,
     1632 + ( sqlGenCtx,
    1633 1633   tableName,
    1634 1634   tableColumns,
    1635 1635   triggerName,
    skipped 7 lines
    1643 1643   -< do
    1644 1644   liftEitherM
    1645 1645   $ createTableEventTrigger @b
    1646  - (_cdcSQLGenCtx dynamicConfig)
     1646 + sqlGenCtx
    1647 1647   sourceConfig
    1648 1648   tableName
    1649 1649   tableColumns
    skipped 116 lines
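
The Schema/Cache.hs hunks above narrow the Inc.cache key for event-trigger recreation from the whole dynamic config record to just the SQLGenCtx the computation actually reads, so unrelated config changes no longer invalidate the cached result. A generic sketch of the idea with a toy memo table (hypothetical names, not the Inc API):

```haskell
{-# LANGUAGE ImportQualifiedPost #-}

module MemoKeySketch where

import Data.IORef
import Data.Map.Strict (Map)
import Data.Map.Strict qualified as Map

-- Hypothetical stand-ins: a wide config record of which the cached
-- computation only reads one field.
data DynamicConfig = DynamicConfig
  { dcSqlGenCtx :: Bool, -- the only field the computation depends on
    dcOtherKnob :: Int   -- changes here should not bust the cache
  }

-- A toy memo table keyed by exactly the inputs the computation reads.
memoize :: (Ord k) => IORef (Map k v) -> k -> IO v -> IO v
memoize ref key compute = do
  cache <- readIORef ref
  case Map.lookup key cache of
    Just v -> pure v -- cache hit: computation not done again
    Nothing -> do
      v <- compute
      atomicModifyIORef' ref $ \m -> (Map.insert key v m, ())
      pure v

-- Keying on dcSqlGenCtx cfg (narrow) rather than cfg (wide) is the
-- essence of the change: flipping dcOtherKnob keeps the entry valid.
recreateTrigger :: IORef (Map Bool ()) -> DynamicConfig -> IO ()
recreateTrigger ref cfg =
  memoize ref (dcSqlGenCtx cfg) (putStrLn "recreating trigger...")
```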
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/Server/App.hs
    skipped 101 lines
    102 102  import Hasura.Server.Init
    103 103  import Hasura.Server.Limits
    104 104  import Hasura.Server.Logging
    105  -import Hasura.Server.Middleware (corsMiddleware)
     105 +import Hasura.Server.Middleware
    106 106  import Hasura.Server.OpenAPI (buildOpenAPI)
    107 107  import Hasura.Server.Rest
    108 108  import Hasura.Server.Types
    skipped 236 lines
    345 345   
    346 346   let getInfo parsedRequest = do
    347 347   authenticationResp <- lift (resolveUserInfo (_lsLogger appEnvLoggers) appEnvManager headers acAuthMode parsedRequest)
    348  - authInfo <- onLeft authenticationResp (logErrorAndResp Nothing requestId req (reqBody, Nothing) False origHeaders (ExtraUserInfo Nothing) . qErrModifier)
     348 + authInfo <- authenticationResp `onLeft` (logErrorAndResp Nothing requestId req (reqBody, Nothing) False Nothing origHeaders (ExtraUserInfo Nothing) . qErrModifier)
    349 349   let (userInfo, _, authHeaders, extraUserInfo) = authInfo
    350 350   appContext <- liftIO $ getAppContext appStateRef
    351 351   schemaCache <- liftIO $ getRebuildableSchemaCacheWithVersion appStateRef
    skipped 20 lines
    372 372   (userInfo, authHeaders, handlerState, includeInternal, extraUserInfo) <- getInfo Nothing
    373 373   (queryJSON, parsedReq) <-
    374 374   runExcept (parseBody reqBody) `onLeft` \e -> do
    375  - logErrorAndResp (Just userInfo) requestId req (reqBody, Nothing) includeInternal origHeaders extraUserInfo (qErrModifier e)
     375 + logErrorAndResp (Just userInfo) requestId req (reqBody, Nothing) includeInternal Nothing origHeaders extraUserInfo (qErrModifier e)
    376 376   res <- lift $ runHandler (_lsLogger appEnvLoggers) handlerState $ handler parsedReq
    377 377   pure (res, userInfo, authHeaders, includeInternal, Just queryJSON, extraUserInfo)
    378 378   -- in this case we parse the request _first_ and then send the request to the webhook for auth
    skipped 3 lines
    382 382   -- if the request fails to parse, call the webhook without a request body
    383 383   -- TODO should we signal this to the webhook somehow?
    384 384   (userInfo, _, _, _, extraUserInfo) <- getInfo Nothing
    385  - logErrorAndResp (Just userInfo) requestId req (reqBody, Nothing) False origHeaders extraUserInfo (qErrModifier e)
     385 + logErrorAndResp (Just userInfo) requestId req (reqBody, Nothing) False Nothing origHeaders extraUserInfo (qErrModifier e)
    386 386   (userInfo, authHeaders, handlerState, includeInternal, extraUserInfo) <- getInfo (Just parsedReq)
    387 387   
    388 388   res <- lift $ runHandler (_lsLogger appEnvLoggers) handlerState $ handler parsedReq
    skipped 4 lines
    393 393   -- if the request fails to parse, call the webhook without a request body
    394 394   -- TODO should we signal this to the webhook somehow?
    395 395   (userInfo, _, _, _, extraUserInfo) <- getInfo Nothing
    396  - logErrorAndResp (Just userInfo) requestId req (reqBody, Nothing) False origHeaders extraUserInfo (qErrModifier e)
     396 + logErrorAndResp (Just userInfo) requestId req (reqBody, Nothing) False Nothing origHeaders extraUserInfo (qErrModifier e)
    397 397   let newReq = case parsedReq of
    398 398   EqrGQLReq reqText -> Just reqText
    399 399   -- Note: We send only `ReqsText` to the webhook in case of `ExtPersistedQueryRequest` (persisted queries),
    skipped 6 lines
    406 406   res <- lift $ runHandler (_lsLogger appEnvLoggers) handlerState $ handler parsedReq
    407 407   pure (res, userInfo, authHeaders, includeInternal, Just queryJSON, extraUserInfo)
    408 408   
     409 + let queryTime = Just (ioWaitTime, serviceTime)
     410 + 
    409 411   -- https://opentelemetry.io/docs/reference/specification/trace/semantic_conventions/span-general/#general-identity-attributes
    410 412   lift $ Tracing.attachMetadata [("enduser.role", roleNameToTxt $ _uiRole userInfo)]
    411 413   
    skipped 3 lines
    415 417   -- log and return result
    416 418   case modResult of
    417 419   Left err ->
    418  - logErrorAndResp (Just userInfo) requestId req (reqBody, queryJSON) includeInternal headers extraUserInfo err
     420 + logErrorAndResp (Just userInfo) requestId req (reqBody, queryJSON) includeInternal queryTime headers extraUserInfo err
    419 421   Right (httpLogGraphQLInfo, res) -> do
    420 422   let httpLogMetadata = buildHttpLogMetadata @m httpLogGraphQLInfo extraUserInfo
    421  - logSuccessAndResp (Just userInfo) requestId req (reqBody, queryJSON) res (Just (ioWaitTime, serviceTime)) origHeaders authHeaders httpLogMetadata
     423 + logSuccessAndResp (Just userInfo) requestId req (reqBody, queryJSON) res queryTime origHeaders authHeaders httpLogMetadata
    422 424   where
    423 425   logErrorAndResp ::
    424 426   forall any ctx.
    skipped 2 lines
    427 429   Wai.Request ->
    428 430   (BL.ByteString, Maybe Value) ->
    429 431   Bool ->
     432 + Maybe (DiffTime, DiffTime) ->
    430 433   [HTTP.Header] ->
    431 434   ExtraUserInfo ->
    432 435   QErr ->
    433 436   Spock.ActionCtxT ctx m any
    434  - logErrorAndResp userInfo reqId waiReq req includeInternal headers extraUserInfo qErr = do
     437 + logErrorAndResp userInfo reqId waiReq req includeInternal qTime headers extraUserInfo qErr = do
    435 438   AppEnv {..} <- lift askAppEnv
    436 439   let httpLogMetadata = buildHttpLogMetadata @m emptyHttpLogGraphQLInfo extraUserInfo
    437 440   jsonResponse = J.encodingToLazyByteString $ qErrEncoder includeInternal qErr
    skipped 1 lines
    439 442   allHeaders = [contentLength, jsonHeader]
    440 443   -- https://opentelemetry.io/docs/reference/specification/trace/semantic_conventions/http/#common-attributes
    441 444   lift $ Tracing.attachMetadata [("http.response_content_length", bsToTxt $ snd contentLength)]
    442  - lift $ logHttpError (_lsLogger appEnvLoggers) appEnvLoggingSettings userInfo reqId waiReq req qErr headers httpLogMetadata True
     445 + lift $ logHttpError (_lsLogger appEnvLoggers) appEnvLoggingSettings userInfo reqId waiReq req qErr qTime Nothing headers httpLogMetadata True
    443 446   mapM_ setHeader allHeaders
    444 447   Spock.setStatus $ qeStatus qErr
    445 448   Spock.lazyBytes jsonResponse
    skipped 434 lines
    880 883   Spock.middleware
    881 884   $ corsMiddleware (acCorsPolicy <$> getAppContext appStateRef)
    882 885   
     886 + -- bypass warp's use of 'auto-update'. See #10662
     887 + Spock.middleware dateHeaderMiddleware
     888 + 
    883 889   -- API Console and Root Dir
    884 890   serveApiConsole
    885 891   
    skipped 254 lines
    1140 1146   (reqId, _newHeaders) <- getRequestId headers
    1141 1147   -- setting the bool flag countDataTransferBytes to False here since we don't want to count the data
    1142 1148   -- transfer bytes for requests to `/healthz` and `/v1/version` endpoints
    1143  - lift $ logHttpError logger appEnvLoggingSettings Nothing reqId req (reqBody, Nothing) err headers (emptyHttpLogMetadata @m) False
     1149 + lift $ logHttpError logger appEnvLoggingSettings Nothing reqId req (reqBody, Nothing) err Nothing Nothing headers (emptyHttpLogMetadata @m) False
    1144 1150   
    1145 1151   spockAction ::
    1146 1152   forall a.
    skipped 65 lines
    1212 1218   (reqId, _newHeaders) <- getRequestId $ Wai.requestHeaders req
    1213 1219   -- setting the bool flag countDataTransferBytes to False here since we don't want to count the data
    1214 1220   -- transfer bytes for requests to undefined resources
    1215  - lift $ logHttpError logger loggingSetting Nothing reqId req (reqBody, Nothing) qErr headers (emptyHttpLogMetadata @m) False
     1221 + lift $ logHttpError logger loggingSetting Nothing reqId req (reqBody, Nothing) qErr Nothing Nothing headers (emptyHttpLogMetadata @m) False
    1216 1222   setHeader jsonHeader
    1217 1223   Spock.setStatus $ qeStatus qErr
    1218 1224   Spock.lazyBytes $ encode qErr
    skipped 1 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/Server/Logging.hs
    skipped 305 lines
    306 306   (BL.ByteString, Maybe J.Value) ->
    307 307   -- | the error
    308 308   QErr ->
     309 + -- | IO/network wait time and service time (respectively) for this request, if available.
     310 + Maybe (DiffTime, DiffTime) ->
     311 + -- | possible compression type
     312 + Maybe CompressionType ->
    309 313   -- | list of request headers
    310 314   [HTTP.Header] ->
    311 315   HttpLogMetadata m ->
    skipped 36 lines
    348 352   buildExtraHttpLogMetadata a = buildExtraHttpLogMetadata @m a
    349 353   emptyExtraHttpLogMetadata = emptyExtraHttpLogMetadata @m
    350 354   
    351  - logHttpError a b c d e f g h i j = lift $ logHttpError a b c d e f g h i j
     355 + logHttpError a b c d e f g h i j k l = lift $ logHttpError a b c d e f g h i j k l
    352 356   
    353 357   logHttpSuccess a b c d e f g h i j k l m = lift $ logHttpSuccess a b c d e f g h i j k l m
    354 358   
    skipped 3 lines
    358 362   buildExtraHttpLogMetadata a = buildExtraHttpLogMetadata @m a
    359 363   emptyExtraHttpLogMetadata = emptyExtraHttpLogMetadata @m
    360 364   
    361  - logHttpError a b c d e f g h i j = lift $ logHttpError a b c d e f g h i j
     365 + logHttpError a b c d e f g h i j k l = lift $ logHttpError a b c d e f g h i j k l
    362 366   
    363 367   logHttpSuccess a b c d e f g h i j k l m = lift $ logHttpSuccess a b c d e f g h i j k l m
    364 368   
    skipped 3 lines
    368 372   buildExtraHttpLogMetadata a = buildExtraHttpLogMetadata @m a
    369 373   emptyExtraHttpLogMetadata = emptyExtraHttpLogMetadata @m
    370 374   
    371  - logHttpError a b c d e f g h i j = lift $ logHttpError a b c d e f g h i j
     375 + logHttpError a b c d e f g h i j k l = lift $ logHttpError a b c d e f g h i j k l
    372 376   
    373 377   logHttpSuccess a b c d e f g h i j k l m = lift $ logHttpSuccess a b c d e f g h i j k l m
    374 378   
    375 379  -- | Log information about the HTTP request
    376 380  data HttpInfoLog = HttpInfoLog
    377  - { hlStatus :: !HTTP.Status,
    378  - hlMethod :: !Text,
    379  - hlSource :: !Wai.IpAddress,
    380  - hlPath :: !Text,
    381  - hlHttpVersion :: !HTTP.HttpVersion,
    382  - hlCompression :: !(Maybe CompressionType),
     381 + { hlStatus :: HTTP.Status,
     382 + hlMethod :: Text,
     383 + hlSource :: Wai.IpAddress,
     384 + hlPath :: Text,
     385 + hlHttpVersion :: HTTP.HttpVersion,
     386 + hlCompression :: Maybe CompressionType,
    383 387   -- | all the request headers
    384  - hlHeaders :: ![HTTP.Header]
     388 + hlHeaders :: [HTTP.Header]
    385 389   }
    386 390   deriving (Eq)
    387 391   
    388 392  instance J.ToJSON HttpInfoLog where
    389  - toJSON (HttpInfoLog st met src path hv compressTypeM _) =
     393 + toJSON (HttpInfoLog st met src path hv compressType _) =
    390 394   J.object
    391 395   [ "status" J..= HTTP.statusCode st,
    392 396   "method" J..= met,
    393 397   "ip" J..= Wai.showIPAddress src,
    394 398   "url" J..= path,
    395 399   "http_version" J..= show hv,
    396  - "content_encoding" J..= (compressionTypeToTxt <$> compressTypeM)
     400 + "content_encoding" J..= (compressionTypeToTxt <$> compressType)
    397 401   ]
    398 402   
    399 403  -- | Information about a GraphQL/Hasura metadata operation over HTTP
    400 404  data OperationLog = OperationLog
    401  - { olRequestId :: !RequestId,
    402  - olUserVars :: !(Maybe SessionVariables),
    403  - olResponseSize :: !(Maybe Int64),
     405 + { olRequestId :: RequestId,
     406 + olUserVars :: Maybe SessionVariables,
     407 + olResponseSize :: Maybe Int64,
    404 408   -- | Response size before compression
    405  - olUncompressedResponseSize :: !Int64,
     409 + olUncompressedResponseSize :: Int64,
    406 410   -- | Request IO wait time, i.e. time spent reading the full request from the socket.
    407  - olRequestReadTime :: !(Maybe Seconds),
     411 + olRequestReadTime :: Maybe Seconds,
    408 412   -- | Service time, not including request IO wait time.
    409  - olQueryExecutionTime :: !(Maybe Seconds),
    410  - olQuery :: !(Maybe J.Value),
    411  - olRawQuery :: !(Maybe Text),
    412  - olError :: !(Maybe QErr),
    413  - olRequestMode :: !RequestMode
     413 + olQueryExecutionTime :: Maybe Seconds,
     414 + olQuery :: Maybe J.Value,
     415 + olRawQuery :: Maybe Text,
     416 + olError :: Maybe QErr,
     417 + olRequestMode :: RequestMode
    414 418   }
    415 419   deriving (Eq, Generic)
    416 420   
    skipped 4 lines
    421 425  -- | @BatchOperationSuccessLog@ contains the information required for a single
    422 426  -- successful operation in a batch request for OSS. This type is a subset of the @GQLQueryOperationSuccessLog@
    423 427  data BatchOperationSuccessLog = BatchOperationSuccessLog
    424  - { _bolQuery :: !(Maybe J.Value),
    425  - _bolResponseSize :: !Int64,
    426  - _bolQueryExecutionTime :: !Seconds
     428 + { _bolQuery :: Maybe J.Value,
     429 + _bolResponseSize :: Int64,
     430 + _bolQueryExecutionTime :: Seconds
    427 431   }
    428 432   deriving (Eq, Generic)
    429 433   
    skipped 269 lines
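
OperationLog above separates olRequestReadTime (IO wait: time spent reading the full request from the socket) from olQueryExecutionTime (service time, excluding that wait), and the new Maybe (DiffTime, DiffTime) argument threads the same pair into error logs as well. A small sketch of measuring such a split (illustrative names; monotonic clock, seconds as Double):

```haskell
module TimingSplitSketch where

import GHC.Clock (getMonotonicTime)

-- Time an action with the monotonic clock (seconds as Double).
timed :: IO a -> IO (Double, a)
timed act = do
  t0 <- getMonotonicTime
  x <- act
  t1 <- getMonotonicTime
  pure (t1 - t0, x)

-- Measure the two phases separately: the first Double is the IO wait
-- (reading the body), the second is pure service time (the handler).
handleTimed :: IO body -> (body -> IO resp) -> IO ((Double, Double), resp)
handleTimed readBody handler = do
  (ioWait, body) <- timed readBody
  (service, resp) <- timed (handler body)
  pure ((ioWait, service), resp)
```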
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/Server/Middleware.hs
    1 1  module Hasura.Server.Middleware
    2 2   ( corsMiddleware,
     3 + dateHeaderMiddleware,
    3 4   )
    4 5  where
    5 6   
    6 7  import Control.Applicative
    7 8  import Data.ByteString qualified as B
    8 9  import Data.CaseInsensitive qualified as CI
     10 +import Data.IORef
    9 11  import Data.Text.Encoding qualified as TE
     12 +import Hasura.CachedTime
    10 13  import Hasura.Prelude
    11 14  import Hasura.Server.Cors
    12 15  import Hasura.Server.Utils
    skipped 61 lines
    74 77   setHeaders hdrs = mapResponseHeaders (\h -> mkRespHdrs hdrs ++ h)
    75 78   mkRespHdrs = map (\(k, v) -> (CI.mk k, v))
    76 79   
     80 +-- bypass warp's use of 'auto-update'. See #10662
     81 +dateHeaderMiddleware :: Middleware
     82 +dateHeaderMiddleware app req respond = do
     83 + (_, _, nowRFC7231) <- liftIO $ readIORef cachedRecentFormattedTimeAndZone
     84 + app req $ respond . mapResponseHeaders (("Date", nowRFC7231) :)
     85 + 
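The dateHeaderMiddleware above prepends a Date value read straight from the shared time cache (the third component of cachedRecentFormattedTimeAndZone), sidestepping warp's own auto-update-based header generation. A sketch of how such an RFC 7231 value might be rendered once per cache refresh (hypothetical helper, not the module's actual code):

```haskell
{-# LANGUAGE ImportQualifiedPost #-}

module DateHeaderSketch where

import Data.ByteString.Char8 qualified as BS8
import Data.Time.Clock (UTCTime, getCurrentTime)
import Data.Time.Format (defaultTimeLocale, formatTime)

-- Render an HTTP-date (RFC 7231 IMF-fixdate), e.g.
-- "Tue, 02 Apr 2024 08:15:00 GMT". Done once per cache refresh, so
-- per-response work is just an IORef read and a cons onto the headers.
toRFC7231 :: UTCTime -> BS8.ByteString
toRFC7231 = BS8.pack . formatTime defaultTimeLocale "%a, %d %b %Y %H:%M:%S GMT"

demo :: IO ()
demo = getCurrentTime >>= BS8.putStrLn . toRFC7231
```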
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/Server/Prometheus.hs
    skipped 30 lines
    31 31   observeHistogramWithLabel,
    32 32   SubscriptionKindLabel (..),
    33 33   SubscriptionLabel (..),
    34  - DynamicSubscriptionLabel (..),
     34 + DynamicGraphqlOperationLabel (..),
    35 35   streamingSubscriptionLabel,
    36 36   liveQuerySubscriptionLabel,
    37 37   recordMetricWithLabel,
    38  - recordSubcriptionMetric,
     38 + recordSubscriptionMetric,
     39 + GraphQLRequestsLabels,
     40 + recordGraphqlOperationMetric,
    39 41   )
    40 42  where
    41 43   
    skipped 29 lines
    71 73   pmGraphQLRequestMetrics :: GraphQLRequestMetrics,
    72 74   pmEventTriggerMetrics :: EventTriggerMetrics,
    73 75   pmWebSocketBytesReceived :: Counter,
    74  - pmWebSocketBytesSent :: CounterVector DynamicSubscriptionLabel,
     76 + pmWebSocketBytesSent :: CounterVector DynamicGraphqlOperationLabel,
    75 77   pmActionBytesReceived :: Counter,
    76 78   pmActionBytesSent :: Counter,
    77 79   pmScheduledTriggerMetrics :: ScheduledTriggerMetrics,
    skipped 5 lines
    83 85   }
    84 86   
    85 87  data GraphQLRequestMetrics = GraphQLRequestMetrics
    86  - { gqlRequestsQuerySuccess :: Counter,
    87  - gqlRequestsQueryFailure :: Counter,
    88  - gqlRequestsMutationSuccess :: Counter,
    89  - gqlRequestsMutationFailure :: Counter,
    90  - gqlRequestsSubscriptionSuccess :: Counter,
    91  - gqlRequestsSubscriptionFailure :: Counter,
    92  - gqlRequestsUnknownFailure :: Counter,
     88 + { gqlRequests :: CounterVector GraphQLRequestsLabels,
    93 89   gqlExecutionTimeSecondsQuery :: Histogram,
    94 90   gqlExecutionTimeSecondsMutation :: Histogram
    95 91   }
    skipped 74 lines
    170 166   
    171 167  makeDummyGraphQLRequestMetrics :: IO GraphQLRequestMetrics
    172 168  makeDummyGraphQLRequestMetrics = do
    173  - gqlRequestsQuerySuccess <- Counter.new
    174  - gqlRequestsQueryFailure <- Counter.new
    175  - gqlRequestsMutationSuccess <- Counter.new
    176  - gqlRequestsMutationFailure <- Counter.new
    177  - gqlRequestsSubscriptionSuccess <- Counter.new
    178  - gqlRequestsSubscriptionFailure <- Counter.new
    179  - gqlRequestsUnknownFailure <- Counter.new
     169 + gqlRequests <- CounterVector.new
    180 170   gqlExecutionTimeSecondsQuery <- Histogram.new []
    181 171   gqlExecutionTimeSecondsMutation <- Histogram.new []
    182 172   pure GraphQLRequestMetrics {..}
    skipped 112 lines
    295 285   toLabels (Just (DynamicEventTriggerLabel triggerName sourceName)) = Map.fromList $ [("trigger_name", triggerNameToTxt triggerName), ("source_name", sourceNameToText sourceName)]
    296 286   
    297 287  data ResponseStatus = Success | Failed
     288 + deriving stock (Generic, Ord, Eq)
    298 289   
    299 290  -- TODO: Make this a method of a new typeclass of the metrics library
    300 291  responseStatusToLabelValue :: ResponseStatus -> Text
    skipped 34 lines
    335 326  liveQuerySubscriptionLabel :: SubscriptionKindLabel
    336 327  liveQuerySubscriptionLabel = SubscriptionKindLabel "live-query"
    337 328   
    338  -data DynamicSubscriptionLabel = DynamicSubscriptionLabel
     329 +data DynamicGraphqlOperationLabel = DynamicGraphqlOperationLabel
    339 330   { _dslParamQueryHash :: Maybe ParameterizedQueryHash,
    340 331   _dslOperationName :: Maybe OperationName
    341 332   }
    342 333   deriving stock (Generic, Ord, Eq)
    343 334   
    344  -instance ToLabels DynamicSubscriptionLabel where
    345  - toLabels (DynamicSubscriptionLabel hash opName) =
     335 +instance ToLabels DynamicGraphqlOperationLabel where
     336 + toLabels (DynamicGraphqlOperationLabel hash opName) =
    346 337   Map.fromList
    347 338   $ maybe [] (\pqh -> [("parameterized_query_hash", bsToTxt $ unParamQueryHash pqh)]) hash
    348 339   <> maybe [] (\op -> [("operation_name", G.unName $ _unOperationName op)]) opName
    349 340   
    350 341  data SubscriptionLabel = SubscriptionLabel
    351 342   { _slKind :: SubscriptionKindLabel,
    352  - _slDynamicLabels :: Maybe DynamicSubscriptionLabel
     343 + _slDynamicLabels :: Maybe DynamicGraphqlOperationLabel
    353 344   }
    354 345   deriving stock (Generic, Ord, Eq)
    355 346   
    skipped 1 lines
    357 348   toLabels (SubscriptionLabel kind Nothing) = Map.fromList $ [("subscription_kind", subscription_kind kind)]
    358 349   toLabels (SubscriptionLabel kind (Just dl)) = (Map.fromList $ [("subscription_kind", subscription_kind kind)]) <> toLabels dl
    359 350   
     351 +-- TODO: Make this a method of a new typeclass of the metrics library
     352 +opTypeToLabelValue :: Maybe G.OperationType -> Text
     353 +opTypeToLabelValue = \case
     354 + (Just G.OperationTypeQuery) -> "query"
     355 + (Just G.OperationTypeMutation) -> "mutation"
     356 + (Just G.OperationTypeSubscription) -> "subscription"
     357 + Nothing -> "unknown"
     358 + 
     359 +data GraphQLRequestsLabels = GraphQLRequestsLabels
     360 + { operation_type :: Maybe G.OperationType,
     361 + response_status :: ResponseStatus,
     362 + dynamic_label :: Maybe DynamicGraphqlOperationLabel
     363 + }
     364 + deriving stock (Generic, Ord, Eq)
     365 + 
      366 +instance ToLabels GraphQLRequestsLabels where
     367 + toLabels (GraphQLRequestsLabels op_type res_status dynamic_labels) =
     368 + (HashMap.fromList $ [("operation_type", opTypeToLabelValue op_type), ("response_status", responseStatusToLabelValue res_status)]) <> (fromMaybe mempty (toLabels <$> dynamic_labels))
     369 + 
    360 370  -- | Record metrics with dynamic label
    361 371  recordMetricWithLabel ::
    362 372   (MonadIO m) =>
    skipped 38 lines
    401 411   
    402 412  -- | Record a subscription metric for all the operation names present in the subscription.
    403 413  -- Use this when you want to update the same value of the metric for all the operation names.
    404  -recordSubcriptionMetric ::
     414 +recordSubscriptionMetric ::
    405 415   (MonadIO m) =>
    406 416   (IO GranularPrometheusMetricsState) ->
    407 417   -- should the metric be observed without a label when granularMetricsState is OFF
    skipped 4 lines
    412 422   -- the metric action to perform
    413 423   (SubscriptionLabel -> IO ()) ->
    414 424   m ()
    415  -recordSubcriptionMetric getMetricState alwaysObserve operationNamesMap parameterizedQueryHash subscriptionKind metricAction = do
     425 +recordSubscriptionMetric getMetricState alwaysObserve operationNamesMap parameterizedQueryHash subscriptionKind metricAction = do
    416 426   -- if no operation names are present, then emit metric with only param query hash as dynamic label
    417 427   if (null operationNamesMap)
    418 428   then do
    419  - let promMetricGranularLabel = SubscriptionLabel subscriptionKind (Just $ DynamicSubscriptionLabel (Just parameterizedQueryHash) Nothing)
     429 + let promMetricGranularLabel = SubscriptionLabel subscriptionKind (Just $ DynamicGraphqlOperationLabel (Just parameterizedQueryHash) Nothing)
    420 430   promMetricLabel = SubscriptionLabel subscriptionKind Nothing
    421 431   recordMetricWithLabel
    422 432   getMetricState
    skipped 4 lines
    427 437   do
    428 438   let operationNames = HashMap.keys operationNamesMap
    429 439   for_ operationNames $ \opName -> do
    430  - let promMetricGranularLabel = SubscriptionLabel subscriptionKind (Just $ DynamicSubscriptionLabel (Just parameterizedQueryHash) opName)
     440 + let promMetricGranularLabel = SubscriptionLabel subscriptionKind (Just $ DynamicGraphqlOperationLabel (Just parameterizedQueryHash) opName)
    431 441   promMetricLabel = SubscriptionLabel subscriptionKind Nothing
    432 442   recordMetricWithLabel
    433 443   getMetricState
    skipped 1 lines
    435 445   (metricAction promMetricGranularLabel)
    436 446   (metricAction promMetricLabel)
    437 447   
     448 +recordGraphqlOperationMetric ::
     449 + (MonadIO m) =>
     450 + (IO GranularPrometheusMetricsState) ->
     451 + Maybe G.OperationType ->
     452 + ResponseStatus ->
     453 + Maybe OperationName ->
     454 + Maybe ParameterizedQueryHash ->
     455 + (GraphQLRequestsLabels -> IO ()) ->
     456 + m ()
     457 +recordGraphqlOperationMetric getMetricState operationType responseStatus operationName parameterizedQueryHash metricAction = do
     458 + let dynamicLabel = DynamicGraphqlOperationLabel parameterizedQueryHash operationName
     459 + promMetricGranularLabel = GraphQLRequestsLabels operationType responseStatus (Just dynamicLabel)
     460 + promMetricLabel = GraphQLRequestsLabels operationType responseStatus Nothing
     461 + recordMetricWithLabel
     462 + getMetricState
     463 + True
     464 + (metricAction promMetricGranularLabel)
     465 + (metricAction promMetricLabel)
     466 + 
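The recordGraphqlOperationMetric above always increments the counter, but whether the emitted labels include the dynamic part (operation name, parameterized query hash) or only the static part is decided per call by recordMetricWithLabel from the runtime GranularPrometheusMetricsState. The core toggle pattern, sketched with illustrative names:

```haskell
{-# LANGUAGE LambdaCase #-}

module GranularToggleSketch where

-- Runtime toggle, read per call so it can be flipped without restart.
data GranularState = GranularMetricsOn | GranularMetricsOff

-- Mirrors the shape of recordMetricWithLabel: consult the toggle and
-- run either the granular action (dynamic labels attached) or the
-- coarse one (static labels only). The real function also takes an
-- "always observe" flag for metrics that must fire in both modes.
recordWithToggle ::
  IO GranularState -> -- how to read the current toggle
  IO () ->            -- granular action: includes op name / query hash
  IO () ->            -- coarse action: static labels only
  IO ()
recordWithToggle getState granular coarse =
  getState >>= \case
    GranularMetricsOn -> granular
    GranularMetricsOff -> coarse
```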
  • ■ ■ ■ ■ ■ ■
    server/src-rsr/catalog_versions.txt
    skipped 197 lines
    198 198  v2.37.0-beta.1 48
    199 199  v2.36.3 48
    200 200  v2.37.0 48
     201 +v2.37.1 48
     202 +v2.38.0-beta.1 48
     203 +v2.38.0 48
    201 204   
  • ■ ■ ■ ■ ■
    server/tests-py/run.sh
    skipped 40 lines
    41 41   
    42 42  echo
    43 43  echo '*** Running tests ***'
     44 +export SQLALCHEMY_SILENCE_UBER_WARNING=1 # disable warnings about upgrading to SQLAlchemy 2.0
    44 45  pytest \
    45 46   --dist=loadscope \
    46 47   -n auto \
    skipped 4 lines
  • ■ ■ ■ ■ ■ ■
    server/tests-py/test_logging.py
    skipped 44 lines
    45 45   headers = {'x-request-id': 'successful-query-log-test'}
    46 46   if hge_ctx.hge_key:
    47 47   headers['x-hasura-admin-secret'] = hge_ctx.hge_key
    48  - resp = hge_ctx.http.post(hge_ctx.hge_url + '/v1/graphql', json=q,
    49  - headers=headers)
     48 + resp = hge_ctx.http.post(hge_ctx.hge_url + '/v1/graphql', json=q, headers=headers)
    50 49   assert resp.status_code == 200 and 'data' in resp.json()
    51 50   
    52 51   # make a query where JSON body parsing fails
    skipped 1 lines
    54 53   headers = {'x-request-id': 'json-parse-fail-log-test'}
    55 54   if hge_ctx.hge_key:
    56 55   headers['x-hasura-admin-secret'] = hge_ctx.hge_key
    57  - resp = hge_ctx.http.post(hge_ctx.hge_url + '/v1/graphql', json=q,
    58  - headers=headers)
     56 + resp = hge_ctx.http.post(hge_ctx.hge_url + '/v1/graphql', json=q, headers=headers)
    59 57   assert resp.status_code == 200 and 'errors' in resp.json()
    60 58   
    61 59   # make an unauthorized query where admin secret/access token is empty
    62 60   q = {'query': 'query { hello {code name} }'}
    63  - headers = {'x-request-id': 'unauthorized-query-test'}
    64  - resp = hge_ctx.http.post(hge_ctx.hge_url + '/v1/graphql', json=q,
    65  - headers=headers)
     61 + headers = {'x-request-id': 'unauthorized-query-log-test'}
     62 + resp = hge_ctx.http.post(hge_ctx.hge_url + '/v1/graphql', json=q, headers=headers)
    66 63   assert resp.status_code == 200 and 'errors' in resp.json()
    67 64   
     65 + # make a successful "run SQL" query
     66 + q = {'type': 'run_sql', 'args': {'source': 'default', 'sql': 'SELECT 1 AS one'}}
     67 + headers = {'x-request-id': 'successful-run-sql-log-test'}
     68 + if hge_ctx.hge_key:
     69 + headers['x-hasura-admin-secret'] = hge_ctx.hge_key
     70 + resp = hge_ctx.http.post(hge_ctx.hge_url + '/v2/query', json=q, headers=headers)
     71 + assert resp.status_code == 200 and 'result' in resp.json()
     72 + 
     73 + # make a failed "run SQL" query
     74 + q = {'type': 'run_sql', 'args': {'source': 'default', 'sql': 'SELECT x FROM non_existent_table'}}
     75 + headers = {'x-request-id': 'failed-run-sql-log-test'}
     76 + if hge_ctx.hge_key:
     77 + headers['x-hasura-admin-secret'] = hge_ctx.hge_key
     78 + resp = hge_ctx.http.post(hge_ctx.hge_url + '/v2/query', json=q, headers=headers)
     79 + assert resp.status_code == 400
     80 + 
    68 81   # make an unauthorized metadata request where admin secret/access token is empty
    69 82   q = {
    70 83   'query': {
    skipped 8 lines
    79 92   }
    80 93   }
    81 94   }
    82  - headers = {'x-request-id': 'unauthorized-metadata-test'}
     95 + headers = {'x-request-id': 'unauthorized-metadata-log-test'}
    83 96   resp = hge_ctx.http.post(hge_ctx.hge_url + '/v1/query', json=q,
    84 97   headers=headers)
    85 98   assert resp.status_code == 401 and 'error' in resp.json()
    skipped 8 lines
    94 107   'kind' in x['detail'] and \
    95 108   x['detail']['kind'] == 'server_configuration'
    96 109   
    97  - config_logs = list(filter(_get_server_config, logs_from_requests))
     110 + config_logs = [l for l in logs_from_requests if _get_server_config(l)]
    98 111   print(config_logs)
    99 112   assert len(config_logs) == 1
    100 113   config_log = config_logs[0]
    skipped 29 lines
    130 143   return x['type'] == 'http-log'
    131 144   
    132 145   print('all logs gathered', logs_from_requests)
    133  - http_logs = list(filter(_get_http_logs, logs_from_requests))
     146 + http_logs = [l for l in logs_from_requests if _get_http_logs(l)]
    134 147   print('http logs', http_logs)
    135 148   assert len(http_logs) > 0
    136 149   for http_log in http_logs:
    skipped 6 lines
    143 156   
    144 157   operation = http_log['detail']['operation']
    145 158   assert 'request_id' in operation
     159 + if operation['request_id'] in ['successful-query-log-test', 'successful-run-sql-log-test', 'failed-run-sql-log-test']:
     160 + assert 'query_execution_time' in operation
    146 161   if operation['request_id'] == 'successful-query-log-test':
    147  - assert 'query_execution_time' in operation
    148 162   assert 'user_vars' in operation
    149 163   # we should see the `query` field in successful operations
    150 164   assert 'query' in operation
    skipped 5 lines
    156 170   def _get_query_logs(x):
    157 171   return x['type'] == 'query-log'
    158 172   
    159  - query_logs = list(filter(_get_query_logs, logs_from_requests))
     173 + query_logs = [l for l in logs_from_requests if _get_query_logs(l)]
    160 174   assert len(query_logs) > 0
    161 175   onelog = query_logs[0]['detail']
    162 176   assert 'request_id' in onelog
    skipped 2 lines
    165 179   assert 'generated_sql' in onelog
    166 180   
    167 181   def test_http_parse_failed_log(self, logs_from_requests):
    168  - def _get_parse_failed_logs(x):
     182 + def _get_logs(x):
    169 183   return x['type'] == 'http-log' and \
    170 184   x['detail']['operation']['request_id'] == 'json-parse-fail-log-test'
    171 185   
    172  - http_logs = list(filter(_get_parse_failed_logs, logs_from_requests))
     186 + http_logs = [l for l in logs_from_requests if _get_logs(l)]
    173 187   print('parse failed logs', http_logs)
    174 188   assert len(http_logs) > 0
    175 189   print(http_logs[0])
    skipped 1 lines
    177 191   assert http_logs[0]['detail']['operation']['error']['code'] == 'parse-failed'
    178 192   
    179 193   def test_http_unauthorized_query(self, logs_from_requests):
    180  - def _get_failed_logs(x):
     194 + def _get_logs(x):
    181 195   return x['type'] == 'http-log' and \
    182  - x['detail']['operation']['request_id'] == 'unauthorized-query-test'
     196 + x['detail']['operation']['request_id'] == 'unauthorized-query-log-test'
    183 197   
    184  - http_logs = list(filter(_get_failed_logs, logs_from_requests))
     198 + http_logs = [l for l in logs_from_requests if _get_logs(l)]
    185 199   print('unauthorized failed logs', http_logs)
    186 200   assert len(http_logs) > 0
    187 201   print(http_logs[0])
    skipped 2 lines
    190 204   assert http_logs[0]['detail']['operation'].get('query') is None
    191 205   assert http_logs[0]['detail']['operation']['raw_query'] is not None
    192 206   
     207 + def test_successful_run_sql(self, logs_from_requests):
     208 + def _get_logs(x):
     209 + return x['type'] == 'http-log' and \
     210 + x['detail']['operation']['request_id'] == 'successful-run-sql-log-test'
     211 + 
     212 + http_logs = [l for l in logs_from_requests if _get_logs(l)]
     213 + print('successful run SQL logs', http_logs)
     214 + assert len(http_logs) > 0
     215 + print(http_logs[0])
     216 + assert http_logs[0]['detail']['operation']['query']['type'] == 'run_sql'
     217 + 
     218 + def test_failed_run_sql(self, logs_from_requests):
     219 + def _get_logs(x):
     220 + return x['type'] == 'http-log' and \
     221 + x['detail']['operation']['request_id'] == 'failed-run-sql-log-test'
     222 + 
     223 + http_logs = [l for l in logs_from_requests if _get_logs(l)]
     224 + print('failed run SQL logs', http_logs)
     225 + assert len(http_logs) > 0
     226 + print(http_logs[0])
     227 + assert http_logs[0]['detail']['operation']['error']['code'] == 'postgres-error'
     228 + assert http_logs[0]['detail']['operation']['query']['type'] == 'run_sql'
     229 + 
    193 230   def test_http_unauthorized_metadata(self, logs_from_requests):
    194  - def _get_failed_logs(x):
     231 + def _get_logs(x):
    195 232   return x['type'] == 'http-log' and \
    196  - x['detail']['operation']['request_id'] == 'unauthorized-metadata-test'
     233 + x['detail']['operation']['request_id'] == 'unauthorized-metadata-log-test'
    197 234   
    198  - http_logs = list(filter(_get_failed_logs, logs_from_requests))
     235 + http_logs = [l for l in logs_from_requests if _get_logs(l)]
    199 236   print('unauthorized failed logs', http_logs)
    200 237   assert len(http_logs) > 0
    201 238   print(http_logs[0])
    skipped 117 lines