graphql-engine · Commit a8630db2
  • Merge branch 'main' into stable

    GitOrigin-RevId: 206571c28962a2e1fda5d3bfda9fd9754bb3bd0f
  • rikinsk committed with hasura-bot 4 months ago
    a8630db2
    1 parent 4f911547
Showing only the first 200 changed files; the commit touches more than can be displayed here.
  • .circleci/test-server.sh
    skipped 702 lines
    703 703   kill_hge_servers
    704 704   ;;
    705 705   
     706 +remote-schema-prioritize-data)
     707 + echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH REMOTE SCHEMA PRIORITIZE DATA/ERRORS ########>\n"
     708 + export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"
     709 + 
     710 + run_hge_with_args serve
     711 + wait_for_port 8080
     712 + 
     713 + pytest "${PYTEST_COMMON_ARGS[@]}" \
     714 + test_remote_schema_prioritize_none.py
     715 + 
     716 + kill_hge_servers
     717 + 
     718 + export HASURA_GRAPHQL_REMOTE_SCHEMA_PRIORITIZE_DATA=true
     719 + 
     720 + run_hge_with_args serve
     721 + wait_for_port 8080
     722 + 
     723 + pytest "${PYTEST_COMMON_ARGS[@]}" \
     724 + test_remote_schema_prioritize_data.py
     725 + 
     726 + unset HASURA_GRAPHQL_REMOTE_SCHEMA_PRIORITIZE_DATA
     727 + 
     728 + kill_hge_servers
     729 + 
     730 + ;;
     731 + 
    706 732  function-permissions)
    707 733   echo -e "\n$(time_elapsed): <########## TEST GRAPHQL-ENGINE WITH FUNCTION PERMISSIONS ENABLED ########>\n"
    708 734   export HASURA_GRAPHQL_INFER_FUNCTION_PERMISSIONS=false
    skipped 712 lines
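    For orientation, the new `remote-schema-prioritize-data` case follows the script's usual pattern: run the server and one pytest module with the default behaviour, then re-run with `HASURA_GRAPHQL_REMOTE_SCHEMA_PRIORITIZE_DATA=true` and the second module. Below is a rough, hedged local equivalent; it assumes a locally built `graphql-engine` binary on `PATH`, a reachable database pointed to by `HASURA_GRAPHQL_DATABASE_URL`, and the server test suite's Python dependencies, none of which this snippet sets up.

    ```bash
    # Hedged sketch of the two phases the new CI case runs (names taken from the
    # diff above; binary, database URL, and pytest environment are assumed).
    export HASURA_GRAPHQL_ADMIN_SECRET="HGE$RANDOM$RANDOM"

    # Phase 1: default behaviour, remote schema response prioritization unset.
    graphql-engine serve &
    HGE_PID=$!
    until curl -fs http://localhost:8080/healthz >/dev/null; do sleep 1; done  # crude stand-in for wait_for_port
    pytest test_remote_schema_prioritize_none.py
    kill "$HGE_PID"

    # Phase 2: same flow with data prioritized over errors.
    export HASURA_GRAPHQL_REMOTE_SCHEMA_PRIORITIZE_DATA=true
    graphql-engine serve &
    HGE_PID=$!
    until curl -fs http://localhost:8080/healthz >/dev/null; do sleep 1; done
    pytest test_remote_schema_prioritize_data.py
    kill "$HGE_PID"
    unset HASURA_GRAPHQL_REMOTE_SCHEMA_PRIORITIZE_DATA
    ```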
  • .ghcversion
    1  -9.4.5
     1 +9.6.4
    2 2   
  • .github/ISSUE_TEMPLATE/01_bug_report.md
    1 1  ---
    2  -name: 🐜 Bug report
    3  -about: If something isn't working 🔧
     2 +name: 🐜 Bug report (V2)
     3 +about: If something isn't working for Hasura, for version 2.x only 🔧
    4 4  labels: "k/bug"
    5 5  ---
    6 6   
    skipped 48 lines
  • .github/ISSUE_TEMPLATE/02_feature_request.md
    1 1  ---
    2  -name: 🚀 Feature request
    3  -about: Suggest an idea for improving Hasura
     2 +name: 🚀 Feature request (V2)
     3 +about: Suggest an idea for improving Hasura, for version 2.x only
    4 4  labels: "k/enhancement"
    5 5  ---
    6 6   
    skipped 17 lines
  • .github/ISSUE_TEMPLATE/03_bug_report_v3.md
     1 +---
     2 +name: 🐜 Bug report (V3)
     3 +about: If something isn't working for Hasura DDN or V3 engine 🔧
     4 +labels: "k/v3-bug"
     5 +---
     6 + 
     7 +### Component
     8 + 
     9 +<!-- Mention which component the issue pertains to. Kindly also add corresponding label to the issue.
     10 + 
     11 +Console -> c/v3-console
     12 +VSCode Extension -> c/v3-lsp
     13 +CLI -> c/v3-cli
     14 +Graphql Engine (for issues with graphql runtime; Metadata specifications) -> c/v3-engine
      15 +Postgres (for issues that clearly relate to Postgres SQL execution) -> c/v3-ndc-postgres
     16 +Other connectors -> c/v3-ndc-hub or c/v3-ndc-xxxx where xxxx is name of connector
     17 + 
     18 +-->
     19 + 
     20 +### What is the current behaviour?
     21 + 
     22 +<!--
      23 + Provide a clear description of the current behaviour.
     24 +-->
     25 + 
     26 +### What is the expected behaviour?
     27 + 
     28 +<!--
     29 + Provide a clear description of what you want to happen.
     30 +-->
     31 + 
     32 +### How to reproduce the issue?
     33 + 
     34 +1.
     35 +2.
     36 +3.
     37 + 
     38 +### Screenshots or Screencast
     39 + 
     40 +<!--
     41 + Providing relevant Screenshots/ Screencasts would help us to debug the issue quickly.
     42 +-->
     43 + 
     44 +### Please provide any traces or logs that could help here.
     45 + 
     46 +<!-- Provide your answer here. -->
     47 + 
     48 +### Any possible solutions/workarounds you're aware of?
     49 + 
     50 +<!-- Provide your answer here. -->
     51 + 
     52 +### Keywords
     53 + 
     54 +<!--
     55 + What keywords did you use when trying to find an existing bug report?
     56 + List them here so people in the future can find this one more easily.
     57 +-->
     58 + 
  • .github/ISSUE_TEMPLATE/04_feature_request_v3.md
     1 +---
     2 +name: 🚀 Feature request (V3)
     3 +about: Suggest an idea for improving Hasura V3, request for new connectors, plugins
     4 +labels: "k/enhancement"
     5 +---
     6 + 
     7 +### Component
     8 + 
     9 +<!-- Mention which component this feature request relates to the most. Kindly also add corresponding label to the issue.
     10 + 
     11 +Console -> c/v3-console
     12 +VSCode Extension -> c/v3-lsp
     13 +CLI -> c/v3-cli
      14 +Graphql Engine (for features that relate to graphql features; metadata modeling) -> c/v3-engine
     15 +Postgres (Postgres specific features) -> c/v3-ndc-postgres
      16 +Connectors (Request for a new connector; enhancement to connectors other than Postgres) -> c/v3-ndc-hub or c/v3-ndc-xxxx where xxxx is name of connector
     17 + 
     18 +-->
     19 + 
     20 +### Is your proposal related to a problem?
     21 + 
     22 +<!--
     23 + Provide a clear and concise description of what the problem is.
     24 + For example, "I'm always frustrated when..."
     25 +-->
     26 + 
     27 +### Describe the solution you'd like
     28 + 
     29 +<!--
     30 + Provide a clear and concise description of what you want to happen.
     31 +-->
     32 + 
     33 +### Describe alternatives you've considered
     34 + 
     35 +<!-- Provide your answer here. -->
     36 + 
  • CONTRIBUTING.md
    1  -# Contributing to Hasura graphql-engine
     1 +# Contributing to Hasura GraphQL Engine
    2 2   
    3  -_First_: if you feel insecure about how to start contributing, feel free to ask us on our
    4  -[Discord channel](https://discordapp.com/invite/hasura) in the #contrib channel. You can also just go ahead with your
    5  -contribution and we'll give you feedback. Don't worry - the worst that can happen is that you'll be politely asked to
     3 +_First_: if you feel insecure about how to start contributing, either to V2 or V3, feel free to ask us on our
     4 +[Discord](https://discordapp.com/invite/hasura) in the `#contrib` channel. You can also just go ahead with your
     5 +contribution, and we'll give you feedback. Don't worry - the worst that can happen is that you'll be politely asked to
    6 6  change something. We appreciate any contributions, and we don't want a wall of rules to stand in the way of that.
    7 7   
    8 8  However, for those individuals who want a bit more guidance on the best way to contribute to the project, read on. This
    skipped 26 lines
    35 35   
    36 36  ## 2. Repo overview
    37 37   
    38  -[hasura/graphql-engine](https://github.com/hasura/graphql-engine) is a mono-repo consisting of 3 components. Each has
    39  -their own contributing guides:
     38 +[hasura/graphql-engine](https://github.com/hasura/graphql-engine) is a mono-repo for both the open source V2 and V3
     39 +Hasura versions.
     40 + 
     41 +### V2
     42 + 
     43 +The V2 portion consists of 3 components, and each has its own contributing guide:
    40 44   
    41 45  1. [Server (Haskell)](server/CONTRIBUTING.md)
    42 46   
    skipped 8 lines
    51 55  [here](https://cla-assistant.io/hasura/graphql-engine) before (or after) the pull request has been submitted. A bot will
    52 56  prompt contributors to sign the CLA via a pull request comment, if necessary.
    53 57   
     58 +### V3
     59 + 
      60 +The V3 portion consists exclusively of the V3 engine, the heart of Hasura, which is written in Rust.
     61 + 
     62 +1. [V3 Engine (Rust)](v3/CONTRIBUTING.md)
     63 + 
     64 +Check out the [V3 README here](/v3/README.md).
     65 + 
    54 66  <a name="first-timers"></a>
    55 67   
    56 68  ## 3. First time contributors welcome!
    57 69   
    58  -We appreciate first time contributors and we are happy to assist you in getting started. In case of questions, just
     70 +We appreciate first time contributors, and we are happy to assist you in getting started. In case of questions, just
    59 71  reach out to us!
    60 72   
    61 73  You find all issues suitable for first time contributors
    skipped 6 lines
    68 80  Of course, we appreciate contributions to all components of Hasura. However, we have identified three areas that are
    69 81  particularly suitable for open source contributions.
    70 82   
    71  -### Docs
     83 +### V2 Docs
    72 84   
    73 85  Our goal is to keep our docs comprehensive and updated. If you would like to help us in doing so, we are grateful for
    74 86  any kind of contribution:
    skipped 6 lines
    81 93   
    82 94  The contributing guide for docs can be found at [docs/CONTRIBUTING.md](docs/CONTRIBUTING.md).
    83 95   
    84  -### Community content
     96 +### V2 Community content
    85 97   
    86 98  Since we launched our [learn page](https://hasura.io/learn/), we are happy about contributions:
    87 99   
    skipped 23 lines
    111 123   
    112 124  Feel free to submit a pull request if you have something to add even if it's not related to anything mentioned above.
    113 125   
    114  -### Hasura CLI
     126 +### V2 CLI
    115 127   
    116 128  We have some issues on the CLI that are suitable for open source contributions. If you know Go or if you would like to
    117 129  learn it by doing, check out the following
    skipped 57 lines
  • README.md
    1  -# Hasura GraphQL Engine
    2  - 
    3  -[![Latest release](https://img.shields.io/github/v/release/hasura/graphql-engine)](https://github.com/hasura/graphql-engine/releases/latest)
    4  -<a href="https://hasura.io/"><img src="assets/brand/hasura_logo_primary_lightbg.svg" align="right" width="200" ></a>
    5  -[![Docs](https://img.shields.io/badge/docs-v2.x-brightgreen.svg?style=flat)](https://hasura.io/docs)
    6  - 
    7  -<a href="https://discord.gg/vBPpJkS"><img src="https://img.shields.io/badge/chat-discord-brightgreen.svg?logo=discord&style=flat"></a>
    8  -<a href="https://twitter.com/intent/follow?screen_name=HasuraHQ"><img src="https://img.shields.io/badge/Follow-HasuraHQ-blue.svg?style=flat&logo=twitter"></a>
    9  -<a href="https://hasura.io/newsletter/"><img src="https://img.shields.io/badge/newsletter-subscribe-yellow.svg?style=flat"></a>
    10  - 
    11  -Hasura is an open-source product that accelerates API development by 10x by giving you [GraphQL](https://hasura.io/graphql/) or REST APIs with built-in authorization on your data, instantly.
    12  - 
    13  -Read more at [hasura.io](https://hasura.io) and the [docs](https://hasura.io/docs/).
    14  - 
    15  -------------------
    16  - 
    17  -![Hasura GraphQL Engine Demo](assets/demo.gif)
    18  - 
    19  -------------------
    20  - 
    21  -![Hasura GraphQL Engine Realtime Demo](assets/realtime.gif)
    22  - 
    23  --------------------
    24  - 
    25  -## Features
    26  - 
    27  -* **Make powerful queries**: Built-in filtering, pagination, pattern search, bulk insert, update, delete mutations
    28  -* **Works with existing, live databases**: Point it to an existing database to instantly get a ready-to-use GraphQL API
    29  -* **Realtime**: Convert any GraphQL query to a live query by using subscriptions
    30  -* **Merge remote schemas**: Access custom GraphQL schemas for business logic via a single GraphQL Engine endpoint. [**Read more**](remote-schemas.md).
    31  -* **Extend with Actions**: Write REST APIs to extend Hasura’s schema with custom business logic.
    32  -* **Trigger webhooks or serverless functions**: On Postgres insert/update/delete events ([read more](event-triggers.md))
    33  -* **Scheduled Triggers**: Execute custom business logic at specific points in time using a cron config or a one-off event.
    34  -* **Fine-grained access control**: Dynamic access control that integrates with your auth system (eg: auth0, firebase-auth)
    35  -* **Admin UI & Migrations**: Admin UI & Rails-inspired schema migrations
    36  -* **Supported Databases**: Supports PostgreSQL (and its flavors), MS SQL Server and Big Query. Support for more [databases](https://hasura.io/graphql/database/) coming soon.
    37  - 
    38  -Read more at [hasura.io](https://hasura.io) and the [docs](https://hasura.io/docs/).
    39  - 
    40  -## Table of contents
    41  -<!-- markdown-toc start - Don't edit this section. Run M-x markdown-toc-refresh-toc -->
    42  -**Table of Contents**
    43  - 
    44  -- [Quickstart:](#quickstart)
    45  - - [One-click deployment on Hasura Cloud](#one-click-deployment-on-hasura-cloud)
    46  - - [Other one-click deployment options](#other-one-click-deployment-options)
    47  - - [Other deployment methods](#other-deployment-methods)
    48  -- [Architecture](#architecture)
    49  -- [Client-side tooling](#client-side-tooling)
    50  -- [Add business logic](#add-business-logic)
    51  - - [Remote schemas](#remote-schemas)
    52  - - [Trigger webhooks on database events](#trigger-webhooks-on-database-events)
    53  -- [Demos](#demos)
    54  - - [Realtime applications](#realtime-applications)
    55  - - [Videos](#videos)
    56  -- [Support & Troubleshooting](#support--troubleshooting)
    57  -- [Stay up to date](#stay-up-to-date)
    58  -- [Contributing](#contributing)
    59  -- [Brand assets](#brand-assets)
    60  -- [License](#license)
    61  -- [Translations](#translations)
    62  -
    63  -<!-- markdown-toc end -->
     1 +![Hasura logo](./assets/hasura_logo_primary_darkbg.png#gh-dark-mode-only)
     2 +![Hasura logo](./assets/hasura_logo_primary_lightbg.png#gh-light-mode-only)
    64 3   
    65  -## Quickstart:
     4 +# Hasura GraphQL Engine
    66 5   
    67  -### One-click deployment on Hasura Cloud
     6 +The Hasura engine is an open source project which supercharges the building of modern applications by providing access
     7 +to data via a single, composable, secure API endpoint.
    68 8   
    69  -The fastest and easiest way to try Hasura out is via [Hasura Cloud](https://hasura.io/docs/latest/graphql/cloud/getting-started/index.html).
     9 +<a href="https://hasura.io/"><img src="https://img.shields.io/badge/🏠_Visit-Hasura_Homepage-blue.svg?style=flat"></a>
     10 +<a href="https://hasura.io/community/"><img src="https://img.shields.io/badge/😊_Join-Community-blue.svg?style=flat"></a>
    70 11   
    71  -1. Click on the following button to deploy GraphQL engine on Hasura Cloud including Postgres add-on or using an existing Postgres database:
     12 +## Hasura V2
    72 13   
    73  - [![Deploy to Hasura Cloud](https://graphql-engine-cdn.hasura.io/img/deploy_to_hasura.png)](https://cloud.hasura.io/signup)
     14 +[![Latest release](https://img.shields.io/github/v/release/hasura/graphql-engine)](https://github.com/hasura/graphql-engine/releases/latest)
     15 +[![Docs](https://img.shields.io/badge/docs-v2.x-yellow.svg?style=flat)](https://hasura.io/docs)
    74 16   
    75  -2. Open the Hasura console
     17 +Hasura V2 is the current stable version of the Hasura GraphQL Engine and is recommended for production use. You can find
     18 +more detailed information about the V2 Hasura GraphQL Engine in the `v2` folder and in this [README](V2-README.md).
    76 19   
    77  - Click on the button "Launch console" to open the Hasura console.
     20 +## Hasura V3
    78 21   
    79  -3. Make your first GraphQL query
     22 +[//]: # (TODO update version badge)
     23 +[//]: # ([![Latest release]&#40;https://img.shields.io/github/v/release/hasura/graphql-engine&#41;]&#40;https://github.com/hasura/graphql-engine/releases/latest&#41;)
     24 +[![Docs](https://img.shields.io/badge/docs-v3.x.alpha-yellow.svg?style=flat)](https://hasura.io/docs/3.0/)
    80 25   
    81  - Create a table and instantly run your first query. Follow this [simple guide](https://hasura.io/docs/latest/graphql/core/getting-started/first-graphql-query.html).
     26 +The future of data delivery. Currently in `alpha`. [Read more](https://hasura.io/ddn)
    82 27   
    83  -### Other one-click deployment options
      28 +The Hasura V3 engine code, which powers Hasura DDN, is in the `v3` folder of this repo. You can find more detailed
      29 +information about the Hasura DDN GraphQL Engine in this [README](/v3/README.md).
    84 30   
    85  -Check out the instructions for the following one-click deployment options:
     31 +The Hasura DDN architecture includes Data Connectors to connect to data sources. All Hasura connectors are also
     32 +available completely open source. Check out the [Connector Hub](https://hasura.io/connectors/) which lists all
     33 +available connectors.
    86 34   
    87  -| **Infra provider** | **One-click link** | **Additional information** |
    88  -|:------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------:|
    89  -| Heroku | [![Deploy to Heroku](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy?template=https://github.com/hasura/graphql-engine-heroku) | [docs](https://hasura.io/docs/latest/graphql/core/guides/deployment/heroku-one-click.html) |
    90  -| DigitalOcean | [![Deploy to DigitalOcean](https://graphql-engine-cdn.hasura.io/img/create_hasura_droplet_200px.png)](https://marketplace.digitalocean.com/apps/hasura?action=deploy&refcode=c4d9092d2c48&utm_source=hasura&utm_campaign=readme) | [docs](https://hasura.io/docs/latest/graphql/core/guides/deployment/digital-ocean-one-click.html#hasura-graphql-engine-digitalocean-one-click-app) |
    91  -| Azure | [![Deploy to Azure](http://azuredeploy.net/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fhasura%2fgraphql-engine%2fmaster%2finstall-manifests%2fazure-container-with-pg%2fazuredeploy.json) | [docs](https://hasura.io/docs/latest/graphql/core/guides/deployment/azure-container-instances-postgres.html) |
    92  -| Render | [![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/render-examples/hasura-graphql) | [docs](https://hasura.io/docs/latest/graphql/core/guides/deployment/render-one-click.html) |
     35 +## Cloning repository
    93 36   
    94  -> Note: The Hasura GraphQL Engine collects anonymous telemetry to understand usage and provide the best experience. Read more [here](https://hasura.io/docs/latest/policies/telemetry/) on what data is collected and the procedure to opt out.
      37 +This repository is a large and active mono-repo containing many parts of the Hasura ecosystem, and it has a long git
      38 +history, which can make cloning the repository for the first time slow and consume a lot of disk space. We recommend
      39 +the following approaches if you run into cloning issues.
    95 40   
    96  -### Other deployment methods
     41 +### Shallow clone
    97 42   
    98  -For Docker-based deployment and advanced configuration options, see [deployment
    99  -guides](https://hasura.io/docs/latest/graphql/core/getting-started/index.html) or
    100  -[install manifests](install-manifests).
     43 +This will only clone the latest commit and ignore all historical commits.
    101 44   
    102  -## Architecture
     45 +```
     46 +git clone https://github.com/hasura/graphql-engine.git --depth 1
     47 +```
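    A hedged follow-up, not part of the README text above but standard git: if you later need the full history (for example to inspect old releases), a shallow clone can be converted in place rather than re-cloned.

    ```bash
    # Fetch the remaining history for an existing shallow clone.
    git fetch --unshallow
    ```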
    103 48   
    104  -The Hasura GraphQL Engine fronts a Postgres database instance and can accept GraphQL requests from your client apps. It can be configured to work with your existing auth system and can handle access control using field-level rules with dynamic variables from your auth system.
    105  - 
    106  -You can also merge remote GraphQL schemas and provide a unified GraphQL API.
    107  - 
    108  -![Hasura GraphQL Engine architecture](assets/hasura-arch.svg)
    109  - 
    110  -## Client-side tooling
    111  - 
    112  -Hasura works with any GraphQL client. See [awesome-graphql](https://github.com/chentsulin/awesome-graphql) for a list of clients. Our [frontend tutorial series](https://hasura.io/learn/#frontend-tutorial) also have integrations with GraphQL clients for different frameworks.
    113  - 
    114  -## Add business logic
    115  - 
    116  -GraphQL Engine provides easy-to-reason, scalable and performant methods for adding custom business logic to your backend:
    117  - 
    118  -### Remote schemas
    119  - 
    120  -Add custom resolvers in a remote schema in addition to Hasura's database-based GraphQL schema. Ideal for use-cases like implementing a payment API, or querying data that is not in your database - [read more](remote-schemas.md).
    121  - 
    122  -### Actions
    123  - 
    124  -Actions are a way to extend Hasura’s schema with custom business logic using custom queries and mutations. Actions can be added to Hasura to handle various use cases such as data validation, data enrichment from external sources and any other complex business logic - [read more](https://hasura.io/docs/latest/graphql/core/actions/index.html)
    125  - 
    126  -### Trigger webhooks on database events
    127  - 
    128  -Add asynchronous business logic that is triggered based on database events.
    129  -Ideal for notifications, data-pipelines from Postgres or asynchronous
    130  -processing - [read more](event-triggers.md).
    131  - 
    132  -### Derived data or data transformations
    133  - 
    134  -Transform data in Postgres or run business logic on it to derive another dataset that can be queried using GraphQL Engine - [read more](https://hasura.io/docs/latest/graphql/core/queries/derived-data.html).
    135  - 
    136  -## Demos
    137  - 
    138  -Check out all the example applications in the [hasura/sample-apps](https://github.com/hasura/sample-apps/tree/main) repository.
    139  - 
    140  -### Realtime applications
    141  - 
    142  -- Group Chat application built with React, includes a typing indicator, online users & new
    143  - message notifications.
    144  - - [Try it out](https://realtime-chat.demo.hasura.io/)
    145  - - [Tutorial](https://github.com/hasura/sample-apps/tree/main/realtime-chat)
    146  - 
    147  -- Live location tracking app that shows a running vehicle changing the current GPS
    148  - coordinates moving on a map.
    149  - - [Try it out](https://realtime-location-tracking.demo.hasura.io/)
    150  - - [Tutorial](https://github.com/hasura/sample-apps/tree/main/realtime-location-tracking)
    151  - 
    152  -- A real-time dashboard for data aggregations on continuously changing data.
    153  - - [Try it out](https://realtime-poll.demo.hasura.io/)
    154  - - [Tutorial](https://github.com/hasura/sample-apps/tree/main/realtime-poll)
    155  - 
    156  -### Videos
    157  - 
    158  -* [Add GraphQL to a self-hosted GitLab instance](https://www.youtube.com/watch?v=a2AhxKqd82Q) (*3:44 mins*)
    159  -* [Todo app with Auth0 and GraphQL backend](https://www.youtube.com/watch?v=15ITBYnccgc) (*4:00 mins*)
    160  -* [GraphQL on GitLab integrated with GitLab auth](https://www.youtube.com/watch?v=m1ChRhRLq7o) (*4:05 mins*)
    161  -* [Dashboard for 10million rides with geo-location (PostGIS, Timescale)](https://www.youtube.com/watch?v=tsY573yyGWA) (*3:06 mins*)
     49 +### Git checkout with only Hasura V3 engine code
     50 +```
     51 +git clone --no-checkout https://github.com/hasura/graphql-engine.git --depth 1
     52 +cd graphql-engine
     53 +git sparse-checkout init --cone
     54 +git sparse-checkout set v3
     55 +git checkout @
     56 +```
      57 +This checks out only the top-level files and the `v3` folder, which contains the Hasura V3 engine code.
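    A hedged usage note, assuming standard `git sparse-checkout` behaviour (git 2.26 or newer): the checkout can be widened or inspected later without re-cloning.

    ```bash
    # Add more top-level directories to the existing sparse checkout, e.g. the docs:
    git sparse-checkout add docs

    # Show the directories currently included:
    git sparse-checkout list
    ```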
    162 58   
    163 59  ## Support & Troubleshooting
    164 60   
    165  -The documentation and community will help you troubleshoot most issues. If you have encountered a bug or need to get in touch with us, you can contact us using one of the following channels:
     61 +To troubleshoot most issues, check out our documentation and community resources. If you have encountered a bug or need
     62 +to get in touch with us, you can contact us using one of the following channels:
    166 63   
     64 +* Hasura DDN documentation: [DDN docs](https://hasura.io/docs/3.0/)
     65 +* Hasura V2 documentation: [V2 docs](https://hasura.io/docs/)
    167 66  * Support & feedback: [Discord](https://discord.gg/hasura)
    168 67  * Issue & bug tracking: [GitHub issues](https://github.com/hasura/graphql-engine/issues)
    169 68  * Follow product updates: [@HasuraHQ](https://twitter.com/hasurahq)
    170 69  * Talk to us on our [website chat](https://hasura.io)
    171 70   
    172  -We are committed to fostering an open and welcoming environment in the community. Please see the [Code of Conduct](code-of-conduct.md).
     71 +## Code of Conduct
     72 + 
     73 +We are committed to fostering an open and welcoming environment in the community. Please see the
     74 +[Code of Conduct](code-of-conduct.md).
     75 + 
     76 +## Security
    173 77   
    174 78  If you want to report a security issue, please [read this](SECURITY.md).
    175 79   
    176 80  ## Stay up to date
    177 81   
    178  -We release new features every month. Sign up for our newsletter by using the link below. We send newsletters only once a month.
    179  -[https://hasura.io/newsletter/](https://hasura.io/newsletter/)
     82 +Join our communities to stay up to date on announcements, events, product updates, and technical blogs.
     83 +[https://hasura.io/community/](https://hasura.io/community/)
    180 84   
    181 85  ## Contributing
    182 86   
    skipped 1 lines
    184 88   
    185 89  ## Brand assets
    186 90   
    187  -Hasura brand assets (logos, the Hasura mascot, powered by badges etc.) can be
    188  -found in the [assets/brand](assets/brand) folder. Feel free to use them in your
    189  -application/website etc. We'd be thrilled if you add the "Powered by Hasura"
    190  -badge to your applications built using Hasura. ❤️
     91 +Hasura brand assets (logos, the Hasura mascot, powered by badges etc.) can be found in the
     92 +[v2/assets/brand](assets/brand) folder. Feel free to use them in your application/website etc. We'd be thrilled if you
     93 +add the "Powered by Hasura" badge to your applications built using Hasura. ❤️
    191 94   
    192  -<div style="display: flex;">
    193  - <img src="assets/brand/powered_by_hasura_primary_darkbg.svg" width="150px"/>
    194  - <img src="assets/brand/powered_by_hasura_primary_lightbg.svg" width="150px"/>
    195  -</div>
     95 +## Licenses
    196 96   
    197  -```html
    198  -<!-- For light backgrounds -->
    199  -<a href="https://hasura.io">
    200  - <img width="150px" src="https://graphql-engine-cdn.hasura.io/img/powered_by_hasura_primary_darkbg.svg" />
    201  -</a>
     97 +### V2
    202 98   
    203  -<!-- For dark backgrounds -->
    204  -<a href="https://hasura.io">
    205  - <img width="150px" src="https://graphql-engine-cdn.hasura.io/img/powered_by_hasura_primary_lightbg.svg" />
    206  -</a>
    207  -```
     99 +The V2 core GraphQL Engine is available under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) (Apache-2.0).
    208 100   
    209  -## License
     101 +All **other contents** in the v2 folder (except those in [`server`](v2/server), [`cli`](v2/cli) and
     102 +[`console`](v2/console) directories) are available under the [MIT License](LICENSE-community).
     103 +This includes everything in the [`docs`](v2/docs) and [`community`](v2/community)
     104 +directories.
    210 105   
    211  -The core GraphQL Engine is available under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) (Apache-2.0).
     106 +### V3
     107 +The [Native Data Connectors](https://github.com/hasura/ndc-hub) are available under
     108 +the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
    212 109   
    213  -All **other contents** (except those in [`server`](server), [`cli`](cli) and
    214  -[`console`](console) directories) are available under the [MIT License](LICENSE-community).
    215  -This includes everything in the [`docs`](docs) and [`community`](community)
    216  -directories.
      110 +The core [V3 GraphQL Engine](v3/) is intended to be licensed under Apache 2.0, but certain MPL/GPL dependencies currently prevent this. Hasura intends to update these dependencies soon and publish a long-term license under which this code will be available.
    217 111   
    218  -## Translations
    219 112   
    220  -This readme is available in the following translations:
    221 113   
    222  -- [Japanese :jp:](translations/README.japanese.md) (:pray: [@moksahero](https://github.com/moksahero))
    223  -- [French :fr:](translations/README.french.md) (:pray: [@l0ck3](https://github.com/l0ck3))
    224  -- [Bosnian :bosnia_herzegovina:](translations/README.bosnian.md) (:pray: [@hajro92](https://github.com/hajro92))
    225  -- [Russian :ru:](translations/README.russian.md) (:pray: [@highflyer910](https://github.com/highflyer910))
    226  -- [Greek 🇬🇷](translations/README.greek.md) (:pray: [@MIP2000](https://github.com/MIP2000))
    227  -- [Spanish 🇲🇽](/translations/README.mx_spanish.md)(:pray: [@ferdox2](https://github.com/ferdox2))
    228  -- [Indonesian :indonesia:](translations/README.indonesian.md) (:pray: [@anwari666](https://github.com/anwari666))
    229  -- [Brazilian Portuguese :brazil:](translations/README.portuguese_br.md) (:pray: [@rubensmp](https://github.com/rubensmp))
    230  -- [German 🇩🇪](translations/README.german.md) (:pray: [@FynnGrandke](https://github.com/FynnGrandke))
    231  -- [Chinese :cn:](translations/README.chinese.md) (:pray: [@jagreetdg](https://github.com/jagreetdg) & [@johnbanq](https://github.com/johnbanq))
    232  -- [Turkish :tr:](translations/README.turkish.md) (:pray: [@berat](https://github.com/berat))
    233  -- [Korean :kr:](translations/README.korean.md) (:pray: [@라스크](https://github.com/laskdjlaskdj12))
    234  -- [Italian :it:](translations/README.italian.md) (:pray: [@befire](https://github.com/francesca-belfiore))
    235  - 
    236  -Translations for other files can be found [here](translations).
    237 114   
  • V2-README.md
     1 +# Hasura GraphQL Engine
     2 + 
     3 +[![Latest release](https://img.shields.io/github/v/release/hasura/graphql-engine)](https://github.com/hasura/graphql-engine/releases/latest)
     4 +<a href="https://hasura.io/"><img src="assets/brand/hasura_logo_primary_lightbg.svg" align="right" width="200" ></a>
     5 +[![Docs](https://img.shields.io/badge/docs-v2.x-brightgreen.svg?style=flat)](https://hasura.io/docs)
     6 + 
     7 +<a href="https://discord.gg/vBPpJkS"><img src="https://img.shields.io/badge/chat-discord-brightgreen.svg?logo=discord&style=flat"></a>
     8 +<a href="https://twitter.com/intent/follow?screen_name=HasuraHQ"><img src="https://img.shields.io/badge/Follow-HasuraHQ-blue.svg?style=flat&logo=twitter"></a>
     9 +<a href="https://hasura.io/newsletter/"><img src="https://img.shields.io/badge/newsletter-subscribe-yellow.svg?style=flat"></a>
     10 + 
     11 +Hasura is an open-source product that accelerates API development by 10x by giving you [GraphQL](https://hasura.io/graphql/) or REST APIs with built-in authorization on your data, instantly.
     12 + 
     13 +Read more at [hasura.io](https://hasura.io) and the [docs](https://hasura.io/docs/).
     14 + 
     15 +------------------
     16 + 
     17 +![Hasura GraphQL Engine Demo](assets/demo.gif)
     18 + 
     19 +------------------
     20 + 
     21 +![Hasura GraphQL Engine Realtime Demo](assets/realtime.gif)
     22 + 
     23 +-------------------
     24 + 
     25 +## Features
     26 + 
     27 +* **Make powerful queries**: Built-in filtering, pagination, pattern search, bulk insert, update, delete mutations
     28 +* **Works with existing, live databases**: Point it to an existing database to instantly get a ready-to-use GraphQL API
     29 +* **Realtime**: Convert any GraphQL query to a live query by using subscriptions
     30 +* **Merge remote schemas**: Access custom GraphQL schemas for business logic via a single GraphQL Engine endpoint. [**Read more**](remote-schemas.md).
     31 +* **Extend with Actions**: Write REST APIs to extend Hasura’s schema with custom business logic.
     32 +* **Trigger webhooks or serverless functions**: On Postgres insert/update/delete events ([read more](event-triggers.md))
     33 +* **Scheduled Triggers**: Execute custom business logic at specific points in time using a cron config or a one-off event.
     34 +* **Fine-grained access control**: Dynamic access control that integrates with your auth system (eg: auth0, firebase-auth)
     35 +* **Admin UI & Migrations**: Admin UI & Rails-inspired schema migrations
     36 +* **Supported Databases**: Supports PostgreSQL (and its flavors), MS SQL Server and Big Query. Support for more [databases](https://hasura.io/graphql/database/) coming soon.
     37 + 
     38 +Read more at [hasura.io](https://hasura.io) and the [docs](https://hasura.io/docs/).
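    To make the query features above concrete, here is a minimal sketch of hitting the GraphQL endpoint with filtering, ordering, and pagination. It assumes a locally running engine on port 8080, an admin secret in `HASURA_GRAPHQL_ADMIN_SECRET`, and a hypothetical tracked `users` table; none of these are provided by this README.

    ```bash
    # Hedged example: filter, order, and paginate a hypothetical `users` table
    # through Hasura's GraphQL endpoint.
    curl -s http://localhost:8080/v1/graphql \
      -H 'Content-Type: application/json' \
      -H "x-hasura-admin-secret: $HASURA_GRAPHQL_ADMIN_SECRET" \
      -d '{"query": "query { users(where: {name: {_ilike: \"%a%\"}}, order_by: {id: asc}, limit: 10) { id name } }"}'
    ```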
     39 + 
     40 +## Table of contents
     41 +<!-- markdown-toc start - Don't edit this section. Run M-x markdown-toc-refresh-toc -->
     42 +**Table of Contents**
     43 + 
     44 +- [Quickstart:](#quickstart)
     45 + - [One-click deployment on Hasura Cloud](#one-click-deployment-on-hasura-cloud)
     46 + - [Other one-click deployment options](#other-one-click-deployment-options)
     47 + - [Other deployment methods](#other-deployment-methods)
     48 +- [Architecture](#architecture)
     49 +- [Client-side tooling](#client-side-tooling)
     50 +- [Add business logic](#add-business-logic)
     51 + - [Remote schemas](#remote-schemas)
     52 + - [Trigger webhooks on database events](#trigger-webhooks-on-database-events)
     53 +- [Demos](#demos)
     54 + - [Realtime applications](#realtime-applications)
     55 + - [Videos](#videos)
     56 +- [Support & Troubleshooting](#support--troubleshooting)
     57 +- [Stay up to date](#stay-up-to-date)
     58 +- [Contributing](#contributing)
     59 +- [Brand assets](#brand-assets)
     60 +- [License](#license)
     61 +- [Translations](#translations)
     62 + 
     63 +<!-- markdown-toc end -->
     64 + 
     65 +## Quickstart:
     66 + 
     67 +### One-click deployment on Hasura Cloud
     68 + 
     69 +The fastest and easiest way to try Hasura out is via [Hasura Cloud](https://hasura.io/docs/latest/graphql/cloud/getting-started/index.html).
     70 + 
     71 +1. Click on the following button to deploy GraphQL engine on Hasura Cloud including Postgres add-on or using an existing Postgres database:
     72 + 
     73 + [![Deploy to Hasura Cloud](https://graphql-engine-cdn.hasura.io/img/deploy_to_hasura.png)](https://cloud.hasura.io/signup)
     74 + 
     75 +2. Open the Hasura console
     76 + 
     77 + Click on the button "Launch console" to open the Hasura console.
     78 + 
     79 +3. Make your first GraphQL query
     80 + 
     81 + Create a table and instantly run your first query. Follow this [simple guide](https://hasura.io/docs/latest/graphql/core/getting-started/first-graphql-query.html).
     82 + 
     83 +### Other one-click deployment options
     84 + 
     85 +Check out the instructions for the following one-click deployment options:
     86 + 
     87 +| **Infra provider** | **One-click link** | **Additional information** |
     88 +|:------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------:|
     89 +| Heroku | [![Deploy to Heroku](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy?template=https://github.com/hasura/graphql-engine-heroku) | [docs](https://hasura.io/docs/latest/graphql/core/guides/deployment/heroku-one-click.html) |
     90 +| DigitalOcean | [![Deploy to DigitalOcean](https://graphql-engine-cdn.hasura.io/img/create_hasura_droplet_200px.png)](https://marketplace.digitalocean.com/apps/hasura?action=deploy&refcode=c4d9092d2c48&utm_source=hasura&utm_campaign=readme) | [docs](https://hasura.io/docs/latest/graphql/core/guides/deployment/digital-ocean-one-click.html#hasura-graphql-engine-digitalocean-one-click-app) |
     91 +| Azure | [![Deploy to Azure](http://azuredeploy.net/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fhasura%2fgraphql-engine%2fmaster%2finstall-manifests%2fazure-container-with-pg%2fazuredeploy.json) | [docs](https://hasura.io/docs/latest/graphql/core/guides/deployment/azure-container-instances-postgres.html) |
     92 +| Render | [![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/render-examples/hasura-graphql) | [docs](https://hasura.io/docs/latest/graphql/core/guides/deployment/render-one-click.html) |
     93 + 
     94 +> Note: The Hasura GraphQL Engine collects anonymous telemetry to understand usage and provide the best experience. Read more [here](https://hasura.io/docs/latest/policies/telemetry/) on what data is collected and the procedure to opt out.
     95 + 
     96 +### Other deployment methods
     97 + 
     98 +For Docker-based deployment and advanced configuration options, see [deployment
     99 +guides](https://hasura.io/docs/latest/graphql/core/getting-started/index.html) or
     100 +[install manifests](install-manifests).
     101 + 
     102 +## Architecture
     103 + 
     104 +The Hasura GraphQL Engine fronts a Postgres database instance and can accept GraphQL requests from your client apps. It can be configured to work with your existing auth system and can handle access control using field-level rules with dynamic variables from your auth system.
     105 + 
     106 +You can also merge remote GraphQL schemas and provide a unified GraphQL API.
     107 + 
     108 +![Hasura GraphQL Engine architecture](assets/hasura-arch.svg)
     109 + 
     110 +## Client-side tooling
     111 + 
      112 +Hasura works with any GraphQL client. See [awesome-graphql](https://github.com/chentsulin/awesome-graphql) for a list of clients. Our [frontend tutorial series](https://hasura.io/learn/#frontend-tutorial) also has integrations with GraphQL clients for different frameworks.
     113 + 
     114 +## Add business logic
     115 + 
     116 +GraphQL Engine provides easy-to-reason, scalable and performant methods for adding custom business logic to your backend:
     117 + 
     118 +### Remote schemas
     119 + 
     120 +Add custom resolvers in a remote schema in addition to Hasura's database-based GraphQL schema. Ideal for use-cases like implementing a payment API, or querying data that is not in your database - [read more](remote-schemas.md).
     121 + 
     122 +### Actions
     123 + 
     124 +Actions are a way to extend Hasura’s schema with custom business logic using custom queries and mutations. Actions can be added to Hasura to handle various use cases such as data validation, data enrichment from external sources and any other complex business logic - [read more](https://hasura.io/docs/latest/graphql/core/actions/index.html)
     125 + 
     126 +### Trigger webhooks on database events
     127 + 
     128 +Add asynchronous business logic that is triggered based on database events.
     129 +Ideal for notifications, data-pipelines from Postgres or asynchronous
     130 +processing - [read more](event-triggers.md).
     131 + 
     132 +### Derived data or data transformations
     133 + 
     134 +Transform data in Postgres or run business logic on it to derive another dataset that can be queried using GraphQL Engine - [read more](https://hasura.io/docs/latest/graphql/core/queries/derived-data.html).
     135 + 
     136 +## Demos
     137 + 
     138 +Check out all the example applications in the [hasura/sample-apps](https://github.com/hasura/sample-apps/tree/main) repository.
     139 + 
     140 +### Realtime applications
     141 + 
     142 +- Group Chat application built with React, includes a typing indicator, online users & new
     143 + message notifications.
     144 + - [Try it out](https://realtime-chat.demo.hasura.io/)
     145 + - [Tutorial](https://github.com/hasura/sample-apps/tree/main/realtime-chat)
     146 + 
     147 +- Live location tracking app that shows a running vehicle changing the current GPS
     148 + coordinates moving on a map.
     149 + - [Try it out](https://realtime-location-tracking.demo.hasura.io/)
     150 + - [Tutorial](https://github.com/hasura/sample-apps/tree/main/realtime-location-tracking)
     151 + 
     152 +- A real-time dashboard for data aggregations on continuously changing data.
     153 + - [Try it out](https://realtime-poll.demo.hasura.io/)
     154 + - [Tutorial](https://github.com/hasura/sample-apps/tree/main/realtime-poll)
     155 + 
     156 +### Videos
     157 + 
     158 +* [Add GraphQL to a self-hosted GitLab instance](https://www.youtube.com/watch?v=a2AhxKqd82Q) (*3:44 mins*)
     159 +* [Todo app with Auth0 and GraphQL backend](https://www.youtube.com/watch?v=15ITBYnccgc) (*4:00 mins*)
     160 +* [GraphQL on GitLab integrated with GitLab auth](https://www.youtube.com/watch?v=m1ChRhRLq7o) (*4:05 mins*)
     161 +* [Dashboard for 10million rides with geo-location (PostGIS, Timescale)](https://www.youtube.com/watch?v=tsY573yyGWA) (*3:06 mins*)
     162 + 
     163 +## Support & Troubleshooting
     164 + 
     165 +The documentation and community will help you troubleshoot most issues. If you have encountered a bug or need to get in touch with us, you can contact us using one of the following channels:
     166 + 
     167 +* Support & feedback: [Discord](https://discord.gg/hasura)
     168 +* Issue & bug tracking: [GitHub issues](https://github.com/hasura/graphql-engine/issues)
     169 +* Follow product updates: [@HasuraHQ](https://twitter.com/hasurahq)
     170 +* Talk to us on our [website chat](https://hasura.io)
     171 + 
     172 +We are committed to fostering an open and welcoming environment in the community. Please see the [Code of Conduct](code-of-conduct.md).
     173 + 
     174 +If you want to report a security issue, please [read this](SECURITY.md).
     175 + 
     176 +## Stay up to date
     177 + 
     178 +We release new features every month. Sign up for our newsletter by using the link below. We send newsletters only once a month.
     179 +[https://hasura.io/newsletter/](https://hasura.io/newsletter/)
     180 + 
     181 +## Contributing
     182 + 
     183 +Check out our [contributing guide](CONTRIBUTING.md) for more details.
     184 + 
     185 +## Brand assets
     186 + 
     187 +Hasura brand assets (logos, the Hasura mascot, powered by badges etc.) can be
     188 +found in the [assets/brand](assets/brand) folder. Feel free to use them in your
     189 +application/website etc. We'd be thrilled if you add the "Powered by Hasura"
     190 +badge to your applications built using Hasura. ❤️
     191 + 
     192 +<div style="display: flex;">
     193 + <img src="assets/brand/powered_by_hasura_primary_darkbg.svg" width="150px"/>
     194 + <img src="assets/brand/powered_by_hasura_primary_lightbg.svg" width="150px"/>
     195 +</div>
     196 + 
     197 +```html
     198 +<!-- For light backgrounds -->
     199 +<a href="https://hasura.io">
     200 + <img width="150px" src="https://graphql-engine-cdn.hasura.io/img/powered_by_hasura_primary_darkbg.svg" />
     201 +</a>
     202 + 
     203 +<!-- For dark backgrounds -->
     204 +<a href="https://hasura.io">
     205 + <img width="150px" src="https://graphql-engine-cdn.hasura.io/img/powered_by_hasura_primary_lightbg.svg" />
     206 +</a>
     207 +```
     208 + 
     209 +## License
     210 + 
     211 +The core GraphQL Engine is available under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) (Apache-2.0).
     212 + 
     213 +All **other contents** (except those in [`server`](server), [`cli`](cli) and
     214 +[`console`](console) directories) are available under the [MIT License](LICENSE-community).
     215 +This includes everything in the [`docs`](docs) and [`community`](community)
     216 +directories.
     217 + 
     218 +## Translations
     219 + 
     220 +This readme is available in the following translations:
     221 + 
     222 +- [Japanese :jp:](translations/README.japanese.md) (:pray: [@moksahero](https://github.com/moksahero))
     223 +- [French :fr:](translations/README.french.md) (:pray: [@l0ck3](https://github.com/l0ck3))
     224 +- [Bosnian :bosnia_herzegovina:](translations/README.bosnian.md) (:pray: [@hajro92](https://github.com/hajro92))
     225 +- [Russian :ru:](translations/README.russian.md) (:pray: [@highflyer910](https://github.com/highflyer910))
     226 +- [Greek 🇬🇷](translations/README.greek.md) (:pray: [@MIP2000](https://github.com/MIP2000))
     227 +- [Spanish 🇲🇽](/translations/README.mx_spanish.md)(:pray: [@ferdox2](https://github.com/ferdox2))
     228 +- [Indonesian :indonesia:](translations/README.indonesian.md) (:pray: [@anwari666](https://github.com/anwari666))
     229 +- [Brazilian Portuguese :brazil:](translations/README.portuguese_br.md) (:pray: [@rubensmp](https://github.com/rubensmp))
     230 +- [German 🇩🇪](translations/README.german.md) (:pray: [@FynnGrandke](https://github.com/FynnGrandke))
     231 +- [Chinese :cn:](translations/README.chinese.md) (:pray: [@jagreetdg](https://github.com/jagreetdg) & [@johnbanq](https://github.com/johnbanq))
     232 +- [Turkish :tr:](translations/README.turkish.md) (:pray: [@berat](https://github.com/berat))
     233 +- [Korean :kr:](translations/README.korean.md) (:pray: [@라스크](https://github.com/laskdjlaskdj12))
     234 +- [Italian :it:](translations/README.italian.md) (:pray: [@befire](https://github.com/francesca-belfiore))
     235 + 
     236 +Translations for other files can be found [here](translations).
     237 + 
  • assets/hasura_logo_primary_darkbg.png
  • assets/hasura_logo_primary_lightbg.png
  • cabal/dev-sh-prof-heap-infomap.project.local
    skipped 29 lines
    30 30  -- TODO would be nice to refactor other dev-sh.project.local to use program-options' as well (and force cabal 3.8)
    31 31  program-options
    32 32   ghc-options: -fdistinct-constructor-tables -finfo-table-map
     33 + -- TODO: consider using this combination instead, which we might use eventually in
     34 + -- production (although this is still not sufficient to get the file size down
     35 + -- small enough imo):
     36 + -- ghc-options: -fdistinct-constructor-tables -finfo-table-map -fno-info-table-map-with-stack -fno-info-table-map-with-fallback
    33 37   -- For each module, STG will be dumped to:
    34 38   -- dist-newstyle/**/*.dump-stg-final
    35 39   ghc-options: -ddump-stg-final -ddump-to-file
    skipped 1 lines
  • cabal.project
    skipped 15 lines
    16 16  --
    17 17  -- See: https://www.haskell.org/cabal/users-guide/nix-local-build.html#configuring-builds-with-cabal-project
    18 18   
    19  -with-compiler: ghc-9.4.5
    20  --- Work around bugs not yet fixed in 9.4.5. These are only enabled with O2
    21  --- which we don't currently use, but disable these defensively
    22  --- https://gitlab.haskell.org/ghc/ghc/-/merge_requests/10282
    23  -package *
    24  - ghc-options:
    25  - -fno-dicts-strict
    26  - -fno-spec-constr
    27  -
     19 +with-compiler: ghc-9.6.4
    28 20   
    29 21  -- package-level parallelism:
    30 22  jobs: $ncpus
    skipped 3 lines
    34 26  packages: server/forks/*/*.cabal
    35 27   
    36 28  -- TODO remove these when we are able:
     29 +allow-newer: req:template-haskell
    37 30  allow-newer: ekg-core:base
    38 31  allow-newer: ekg-core:ghc-prim
    39 32  allow-newer: ekg-core:inspection-testing
    skipped 5 lines
    45 38  allow-newer: ekg-prometheus:bytestring
    46 39  -- Migrating to 0.25+ looks like it will be a real pain... :(
    47 40  -- https://github.com/morpheusgraphql/morpheus-graphql/pull/766
     41 +allow-newer: relude:base
     42 +allow-newer: relude:ghc-prim
    48 43  allow-newer: morpheus-graphql:text
     44 +allow-newer: morpheus-graphql:relude
     45 +allow-newer: morpheus-graphql:vector
     46 +allow-newer: morpheus-graphql:transformers
    49 47  allow-newer: morpheus-graphql-app:text
     48 +allow-newer: morpheus-graphql-app:vector
     49 +allow-newer: morpheus-graphql-app:transformers
    50 50  allow-newer: morpheus-graphql-code-gen:text
     51 +allow-newer: morpheus-graphql-code-gen:optparse-applicative
    51 52  allow-newer: morpheus-graphql-code-gen-utils:text
    52 53  allow-newer: morpheus-graphql-core:text
     54 +allow-newer: morpheus-graphql-core:transformers
     55 +allow-newer: morpheus-graphql-core:vector
    53 56  allow-newer: morpheus-graphql-server:text
     57 +allow-newer: morpheus-graphql-server:transformers
     58 +allow-newer: morpheus-graphql-server:vector
    54 59  allow-newer: morpheus-graphql-client:text
     60 +allow-newer: morpheus-graphql-client:req
     61 +allow-newer: morpheus-graphql-client:transformers
     62 +allow-newer: morpheus-graphql-client:vector
    55 63  allow-newer: morpheus-graphql-subscriptions:text
     64 +allow-newer: morpheus-graphql-subscriptions:transformers
     65 +allow-newer: servant-openapi3:base
     66 +allow-newer: openapi3:base
     67 +allow-newer: servant-client:base
     68 +allow-newer: servant-client:transformers
     69 +allow-newer: servant-client:mtl
     70 +allow-newer: servant-client-core:base
     71 +allow-newer: servant-client-core:free
     72 +allow-newer: servant-client-core:template-haskell
     73 +allow-newer: servant-client-core:transformers
     74 +allow-newer: servant-server:base
     75 +allow-newer: servant-server:template-haskell
     76 +allow-newer: servant-server:transformers
     77 +allow-newer: servant-server:mtl
     78 +allow-newer: servant:base
     79 +allow-newer: servant:mtl
     80 +allow-newer: ghc-heap-view:base
     81 +allow-newer: ghc-heap-view:Cabal
     82 +allow-newer: servant:transformers
     83 +allow-newer: singleton-bool:base
     84 +allow-newer: semigroupoids:base
     85 +allow-newer: http-api-data:base
     86 +allow-newer: validation:assoc
     87 +allow-newer: aeson:th-abstraction
     88 +allow-newer: optics-th:th-abstraction
     89 +allow-newer: generics-sop:th-abstraction
    56 90   
    57 91  -- https://gitlab.haskell.org/ghc/ghc-debug/-/merge_requests/27
    58 92  allow-newer: ghc-debug-stub:ghc-prim
    skipped 36 lines
    95 129  allow-newer: hedgehog-generic:base
    96 130  allow-newer: hedgehog-generic:hedgehog
    97 131   
     132 +-- 9.6 support. Awaiting release I guess...
     133 +source-repository-package
     134 + type: git
     135 + location: https://github.com/agrafix/Spock
     136 + tag: 40d028bfea0e94ca7096c719cd024ca47a46e559
     137 + subdir: Spock-core
     138 + 
     139 +-- 9.6 support: https://github.com/jfischoff/postgres-options/pull/7
     140 +source-repository-package
     141 + type: git
     142 + location: https://github.com/jfischoff/postgres-options.git
     143 + tag: 3100f7ca4319748a07a46e2838f4c80f8e3f076a
     144 + 
     145 +-- 9.6 support: https://github.com/MichaelXavier/cron/pull/51
     146 +source-repository-package
     147 + type: git
     148 + location: https://github.com/TristanCacqueray/cron.git
     149 + tag: 5f5b662a1d7abc3951ea5a2a625bbf3e83f7a11a
     150 + 
    98 151  source-repository-package
    99 152   type: git
    100 153   location: https://github.com/hasura/kriti-lang.git
    skipped 20 lines
    121 174   location: https://github.com/hasura/ekg-core.git
    122 175   tag: df610859603b504494ad770bdbb7053a7f0b9a6c
    123 176   
     177 +-- because we need 27d87f01, not yet released
    124 178  source-repository-package
    125 179   type: git
    126 180   location: https://github.com/snoyberg/yaml.git
    skipped 9 lines
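    Since both `.ghcversion` and `with-compiler` now point at GHC 9.6.4, a local build needs that toolchain. A hedged sketch follows, assuming you use ghcup (any other install method works too, and the `exe:graphql-engine` target name is an assumption, not something stated in this diff):

    ```bash
    # Install and select the GHC version this commit pins (ghcup is one option).
    ghcup install ghc 9.6.4
    ghcup set ghc 9.6.4
    ghc --version          # should report 9.6.4, matching .ghcversion

    # Build the server against the updated cabal.project / freeze file.
    cabal update
    cabal build exe:graphql-engine
    ```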
  • cabal.project.freeze
    1 1  active-repositories: hackage.haskell.org:merge
    2  -constraints: any.Cabal ==3.8.1.0,
    3  - any.Cabal-syntax ==3.8.1.0,
     2 +constraints: any.Cabal ==3.10.1.0,
     3 + any.Cabal-syntax ==3.10.1.0,
    4 4   any.Diff ==0.4.1,
    5 5   any.Glob ==0.10.2,
    6 6   any.HTTP ==4000.4.1,
    7 7   any.HUnit ==1.6.2.0,
    8  - any.OneTuple ==0.3.1,
     8 + any.OneTuple ==0.4.1.1,
    9 9   any.Only ==0.1,
    10 10   any.QuickCheck ==2.14.2,
    11 11   any.RSA ==2.4.1,
    skipped 7 lines
    19 19   any.aeson-optics ==1.2.0.1,
    20 20   any.aeson-pretty ==0.8.9,
    21 21   any.aeson-qq ==0.8.4,
    22  - any.alex ==3.2.7.1,
    23  - any.ansi-terminal ==0.11.3,
    24  - any.ansi-wl-pprint ==0.6.9,
     22 + any.alex ==3.3.0.0,
     23 + any.ansi-terminal ==0.11.5,
     24 + any.ansi-terminal-types ==0.11.5,
     25 + any.ansi-wl-pprint ==1.0.2,
    25 26   any.appar ==0.1.8,
    26  - any.array ==0.5.4.0,
     27 + any.array ==0.5.6.0,
    27 28   any.asn1-encoding ==0.9.6,
    28 29   any.asn1-parse ==0.9.5,
    29 30   any.asn1-types ==0.3.4,
    30  - any.assoc ==1.0.2,
     31 + any.assoc ==1.1,
    31 32   any.async ==2.2.4,
    32 33   any.atomic-primops ==0.8.4,
    33 34   any.attoparsec ==0.14.4,
    skipped 2 lines
    36 37   any.auto-update ==0.1.6,
    37 38   any.autodocodec ==0.2.0.3,
    38 39   any.autodocodec-openapi3 ==0.2.1.1,
    39  - any.barbies ==2.0.3.1,
    40  - any.base ==4.17.1.0,
     40 + any.barbies ==2.0.4.0,
     41 + any.base ==4.18.2.0,
    41 42   any.base-compat ==0.12.2,
    42 43   any.base-compat-batteries ==0.12.2,
    43  - any.base-orphans ==0.8.7,
     44 + any.base-orphans ==0.9.0,
    44 45   any.base-prelude ==1.6.1,
    45 46   any.base16-bytestring ==1.0.2.0,
    46 47   any.base64-bytestring ==1.2.1.0,
    47 48   any.basement ==0.0.15,
    48  - any.bifunctors ==5.5.13,
     49 + any.bifunctors ==5.6.1,
    49 50   any.bimap ==0.5.0,
    50 51   any.binary ==0.8.9.1,
    51 52   any.binary-orphans ==1.0.4.1,
    52  - any.binary-parser ==0.5.7.2,
    53  - any.bitvec ==1.1.3.0,
     53 + any.binary-parser ==0.5.7.3,
     54 + any.bitvec ==1.1.4.0,
    54 55   any.blaze-builder ==0.4.2.2,
    55 56   any.blaze-html ==0.9.1.2,
    56 57   any.blaze-markup ==0.8.2.8,
    57  - any.boring ==0.2,
    58  - any.brick ==1.5,
     58 + any.boring ==0.2.1,
     59 + any.brick ==1.9,
    59 60   any.bsb-http-chunked ==0.0.0.4,
    60 61   any.byteable ==0.1.1,
    61 62   any.byteorder ==1.0.4,
    62  - any.bytestring ==0.11.4.0,
     63 + any.bytestring ==0.11.5.3,
    63 64   any.bytestring-builder ==0.10.8.2.0,
    64  - any.bytestring-lexing ==0.5.0.9,
     65 + any.bytestring-lexing ==0.5.0.10,
    65 66   any.bytestring-strict-builder ==0.4.5.6,
    66 67   any.bytestring-tree-builder ==0.2.7.10,
    67 68   any.cabal-doctest ==1.0.9,
    skipped 3 lines
    71 72   any.cereal ==0.5.8.3,
    72 73   any.charset ==0.3.9,
    73 74   any.clock ==0.8.3,
    74  - any.cmdargs ==0.10.21,
     75 + any.cmdargs ==0.10.22,
    75 76   any.code-page ==0.2.1,
    76 77   any.colour ==2.3.6,
    77 78   any.comonad ==5.0.8,
    78 79   any.concise ==0.1.0.1,
    79  - any.concurrent-output ==1.10.16,
    80  - any.conduit ==1.3.4.3,
     80 + any.concurrent-output ==1.10.18,
     81 + any.conduit ==1.3.5,
    81 82   any.conduit-extra ==1.3.6,
    82 83   any.config-ini ==0.2.5.0,
    83 84   any.connection ==0.3.1,
    84 85   any.constraints ==0.13.4,
    85  - any.constraints-extras ==0.3.2.1,
     86 + any.constraints-extras ==0.4.0.0,
    86 87   any.containers ==0.6.7,
    87 88   any.contravariant ==1.5.5,
    88 89   any.contravariant-extras ==0.3.5.3,
    89  - any.cookie ==0.4.5,
    90  - any.criterion ==1.5.13.0,
    91  - any.criterion-measurement ==0.1.4.0,
     90 + any.cookie ==0.4.6,
     91 + any.criterion ==1.6.3.0,
     92 + any.criterion-measurement ==0.2.1.0,
    92 93   any.cron ==0.7.0,
    93 94   any.crypto-api ==0.13.3,
    94 95   any.crypto-pubkey-types ==0.4.3,
    skipped 17 lines
    112 113   any.data-serializer ==0.3.5,
    113 114   any.data-textual ==0.3.0.3,
    114 115   any.dec ==0.0.5,
    115  - any.deepseq ==1.4.8.0,
    116  - any.deferred-folds ==0.9.18.2,
     116 + any.deepseq ==1.4.8.1,
     117 + any.deferred-folds ==0.9.18.3,
    117 118   any.dense-linear-algebra ==0.1.0.0,
    118 119   any.dependent-map ==0.4.0.0,
    119  - any.dependent-sum ==0.6.2.0,
    120  - any.dependent-sum-template ==0.1.1.1,
    121  - any.directory ==1.3.7.1,
     120 + any.dependent-sum ==0.7.2.0,
     121 + any.dependent-sum-template ==0.2.0.0,
     122 + any.directory ==1.3.8.1,
    122 123   any.distributive ==0.6.2.1,
    123 124   any.dlist ==1.0,
    124 125   any.doctest ==0.21.1,
    125 126   any.double-conversion ==2.0.4.2,
    126  - any.easy-file ==0.2.2,
     127 + any.easy-file ==0.2.5,
    127 128   any.either ==5.0.2,
    128 129   any.ekg-core ==0.1.1.7,
    129 130   any.entropy ==0.4.1.10,
    130 131   any.erf ==2.0.0.0,
    131 132   any.errors ==2.3.0,
    132  - any.exceptions ==0.10.5,
     133 + any.exceptions ==0.10.7,
    133 134   any.extensible-exceptions ==0.1.1.4,
    134  - any.extra ==1.7.12,
     135 + any.extra ==1.7.13,
    135 136   any.fail ==4.9.0.0,
    136  - any.fast-logger ==3.1.2,
     137 + any.fast-logger ==3.2.1,
    137 138   any.file-embed ==0.0.15.0,
    138  - any.filepath ==1.4.2.2,
     139 + any.filepath ==1.4.200.1,
    139 140   any.flush-queue ==1.0.0,
    140  - any.focus ==1.0.3,
     141 + any.focus ==1.0.3.1,
    141 142   any.fold-debounce ==0.2.0.11,
    142  - any.foldl ==1.4.12,
     143 + any.foldl ==1.4.14,
    143 144   any.formatting ==7.2.0,
    144  - any.free ==5.1.9,
    145  - any.generic-lens ==2.2.1.0,
     145 + any.free ==5.2,
     146 + any.generic-lens ==2.2.2.0,
    146 147   any.generic-lens-core ==2.2.1.0,
    147 148   any.generic-monoid ==0.1.0.1,
    148  - any.generically ==0.1,
    149  - any.generics-sop ==0.5.1.2,
    150  - any.ghc ==9.4.5,
     149 + any.generically ==0.1.1,
     150 + any.generics-sop ==0.5.1.3,
     151 + any.ghc ==9.6.4,
    151 152   any.ghc-bignum ==1.3,
    152  - any.ghc-boot ==9.4.5,
    153  - any.ghc-boot-th ==9.4.5,
     153 + any.ghc-boot ==9.6.4,
     154 + any.ghc-boot-th ==9.6.4,
    154 155   any.ghc-debug-convention ==0.4.0.0,
    155 156   any.ghc-debug-stub ==0.4.0.0,
    156  - any.ghc-heap ==9.4.5,
     157 + any.ghc-heap ==9.6.4,
    157 158   any.ghc-heap-view ==0.6.4,
    158 159   any.ghc-paths ==0.1.0.12,
    159  - any.ghc-prim ==0.9.0,
    160  - any.ghci ==9.4.5,
    161  - any.happy ==1.20.0,
    162  - any.hashable ==1.4.1.0,
     160 + any.ghc-prim ==0.10.0,
     161 + any.ghci ==9.6.4,
     162 + any.happy ==1.20.1.1,
     163 + any.hashable ==1.4.2.0,
    163 164   any.hashtables ==1.3.1,
    164  - any.haskell-lexer ==1.1,
     165 + any.haskell-lexer ==1.1.1,
    165 166   any.haskell-src-exts ==1.23.1,
    166  - any.haskell-src-meta ==0.8.11,
     167 + any.haskell-src-meta ==0.8.12,
    167 168   any.hasql ==1.5.0.5,
    168 169   any.hasql-pool ==0.5.2.2,
    169 170   any.hasql-transaction ==1.0.1.1,
    skipped 1 lines
    171 172   any.hedgehog-generic ==0.1,
    172 173   any.hostname ==1.0,
    173 174   any.hourglass ==0.2.12,
    174  - any.hpc ==0.6.1.0,
     175 + any.hpc ==0.6.2.0,
    175 176   any.hs-opentelemetry-otlp ==0.0.1.0,
    176  - any.hsc2hs ==0.68.8,
    177  - any.hspec ==2.10.10,
    178  - any.hspec-core ==2.10.10,
    179  - any.hspec-discover ==2.10.10,
    180  - any.hspec-expectations ==0.8.2,
     177 + any.hsc2hs ==0.68.9,
     178 + any.hspec ==2.11.1,
     179 + any.hspec-core ==2.11.1,
     180 + any.hspec-discover ==2.11.1,
     181 + any.hspec-expectations ==0.8.3,
    181 182   any.hspec-expectations-json ==1.0.0.7,
    182 183   any.hspec-expectations-lifted ==0.10.0,
    183 184   any.hspec-hedgehog ==0.0.1.2,
    skipped 5 lines
    189 190   any.http-date ==0.0.11,
    190 191   any.http-media ==0.8.0.0,
    191 192   any.http-types ==0.12.3,
    192  - any.http2 ==3.0.3,
     193 + any.http2 ==4.1.2,
    193 194   any.hvect ==0.4.0.1,
    194 195   any.immortal ==0.2.2.1,
    195 196   any.indexed-profunctors ==0.1.1,
    196  - any.indexed-traversable ==0.1.2,
    197  - any.indexed-traversable-instances ==0.1.1.1,
    198  - any.insert-ordered-containers ==0.2.5.1,
     197 + any.indexed-traversable ==0.1.2.1,
     198 + any.indexed-traversable-instances ==0.1.1.2,
     199 + any.insert-ordered-containers ==0.2.5.2,
    199 200   any.inspection-testing ==0.5.0.1,
    200 201   any.integer-gmp ==1.1,
    201 202   any.integer-logarithms ==1.0.3.1,
    202  - any.invariant ==0.6.1,
     203 + any.invariant ==0.6.2,
    203 204   any.iproute ==1.7.12,
    204 205   any.iso8601-time ==0.1.5,
    205  - any.isomorphism-class ==0.1.0.7,
    206  - any.jose ==0.9,
    207  - any.jose-jwt ==0.9.4,
     206 + any.isomorphism-class ==0.1.0.9,
     207 + any.jose ==0.10,
     208 + any.jose-jwt ==0.9.5,
    208 209   any.js-chart ==2.9.4.1,
    209 210   any.jwt ==0.11.0,
    210 211   any.kan-extensions ==5.2.5,
    211 212   any.keys ==3.12.3,
    212 213   any.kriti-lang ==0.3.3,
    213 214   any.launchdarkly-server-sdk ==4.0.0,
    214  - any.lens ==5.2.2,
     215 + any.lens ==5.2.3,
    215 216   any.lens-aeson ==1.2.2,
    216 217   any.lens-family ==2.1.2,
    217 218   any.lens-family-core ==2.1.2,
    218 219   any.libdeflate-hs ==0.1.0.0,
    219 220   any.libyaml ==0.1.2,
    220  - any.lifted-async ==0.10.2.3,
     221 + any.lifted-async ==0.10.2.4,
    221 222   any.lifted-base ==0.2.3.12,
    222  - any.list-t ==1.0.5.3,
    223  - any.logict ==0.8.0.0,
     223 + any.list-t ==1.0.5.6,
     224 + any.logict ==0.8.1.0,
    224 225   any.lrucache ==1.2.0.1,
    225 226   any.lucid2 ==0.0.20221012,
    226  - any.managed ==1.0.9,
    227  - any.markdown-unlit ==0.5.1,
     227 + any.managed ==1.0.10,
     228 + any.markdown-unlit ==0.6.0,
    228 229   any.math-functions ==0.3.4.2,
    229  - any.megaparsec ==9.2.2,
     230 + any.megaparsec ==9.3.1,
    230 231   any.memory ==0.18.0,
    231  - any.microlens ==0.4.13.0,
    232  - any.microlens-mtl ==0.2.0.2,
    233  - any.microlens-th ==0.4.3.10,
     232 + any.microlens ==0.4.13.1,
     233 + any.microlens-mtl ==0.2.0.3,
     234 + any.microlens-th ==0.4.3.14,
    234 235   any.microstache ==1.0.2.3,
    235 236   any.mime-types ==0.1.1.0,
    236 237   any.mmorph ==1.2.0,
    237 238   any.modern-uri ==0.3.6.0,
    238 239   any.monad-control ==1.0.3.1,
    239  - any.monad-logger ==0.3.37,
     240 + any.monad-logger ==0.3.39,
    240 241   any.monad-loops ==0.4.3,
    241 242   any.monad-time ==0.4.0.0,
    242  - any.monad-validate ==1.2.0.1,
     243 + any.monad-validate ==1.3.0.0,
    243 244   any.mono-traversable ==1.0.15.3,
    244 245   any.morpheus-graphql ==0.24.3,
    245 246   any.morpheus-graphql-app ==0.24.3,
    skipped 3 lines
    249 250   any.morpheus-graphql-core ==0.24.3,
    250 251   any.morpheus-graphql-server ==0.24.3,
    251 252   any.morpheus-graphql-subscriptions ==0.24.3,
    252  - any.mtl ==2.2.2,
     253 + any.mtl ==2.3.1,
    253 254   any.mtl-compat ==0.2.2,
    254  - any.mustache ==2.4.1,
     255 + any.mustache ==2.4.2,
    255 256   any.mwc-random ==0.15.0.2,
    256 257   any.natural-transformation ==0.4,
    257  - any.network ==3.1.2.7,
     258 + any.network ==3.1.4.0,
    258 259   any.network-bsd ==2.8.1.0,
    259 260   any.network-byte-order ==0.1.6,
    260 261   any.network-info ==0.2.1,
    261 262   any.network-ip ==0.3.0.3,
    262  - any.network-uri ==2.6.4.1,
     263 + any.network-uri ==2.6.4.2,
    263 264   any.nonempty-containers ==0.3.4.4,
    264  - any.nonempty-vector ==0.2.1.0,
     265 + any.nonempty-vector ==0.2.2.0,
    265 266   any.odbc ==0.2.7,
    266 267   any.old-locale ==1.0.0.7,
    267 268   any.old-time ==1.1.0.3,
    268  - any.openapi3 ==3.2.2,
     269 + any.openapi3 ==3.2.3,
    269 270   any.optics-core ==0.4.1,
    270 271   any.optics-extra ==0.4.2.1,
    271 272   any.optics-th ==0.4.1,
    272  - any.optparse-applicative ==0.16.1.0,
    273  - any.optparse-generic ==1.4.8,
     273 + any.optparse-applicative ==0.18.1.0,
     274 + any.optparse-generic ==1.5.1,
    274 275   any.parallel ==3.2.2.0,
    275  - any.parsec ==3.1.15.1,
     276 + any.parsec ==3.1.16.1,
    276 277   any.parser-combinators ==1.3.0,
    277 278   any.parsers ==0.12.11,
    278 279   any.pcre-light ==0.4.1.0,
    skipped 1 lines
    280 281   any.pointed ==5.0.4,
    281 282   any.postgres-options ==0.2.0.0,
    282 283   any.postgresql-binary ==0.12.5,
    283  - any.postgresql-libpq ==0.9.4.3,
     284 + any.postgresql-libpq ==0.9.5.0,
    284 285   any.postgresql-simple ==0.6.5,
    285 286   any.pretty ==1.1.3.6,
    286 287   any.pretty-show ==1.10,
    287 288   any.pretty-simple ==4.1.2.0,
    288 289   any.prettyprinter ==1.7.1,
    289 290   any.prettyprinter-ansi-terminal ==1.1.3,
     291 + any.prettyprinter-compat-ansi-wl-pprint ==1.0.2,
    290 292   any.primitive ==0.7.4.0,
    291  - any.primitive-extras ==0.10.1.5,
     293 + any.primitive-extras ==0.10.1.6,
    292 294   any.primitive-unlifted ==0.1.3.1,
    293  - any.process ==1.6.16.0,
     295 + any.process ==1.6.17.0,
    294 296   any.profunctors ==5.6.2,
    295  - any.proto-lens ==0.7.1.2,
    296  - any.proto-lens-runtime ==0.7.0.3,
     297 + any.proto-lens ==0.7.1.3,
     298 + any.proto-lens-runtime ==0.7.0.4,
    297 299   any.psqueues ==0.2.7.3,
    298  - any.quickcheck-instances ==0.3.28,
     300 + any.quickcheck-instances ==0.3.29.1,
    299 301   any.quickcheck-io ==0.2.0,
    300 302   any.random ==1.2.1.1,
    301 303   any.raw-strings-qq ==1.1,
    302  - any.recv ==0.0.0,
    303  - any.refined ==0.8,
    304  - any.reflection ==2.1.6,
     304 + any.recv ==0.1.0,
     305 + any.refined ==0.8.1,
     306 + any.reflection ==2.1.7,
    305 307   any.regex-base ==0.94.0.2,
    306 308   any.regex-posix ==0.96.0.1,
    307  - any.regex-tdfa ==1.3.2,
     309 + any.regex-tdfa ==1.3.2.1,
    308 310   any.relude ==1.2.0.0,
    309 311   any.req ==3.13.0,
    310 312   any.reroute ==0.7.0.0,
    311 313   any.resourcet ==1.2.6,
    312  - any.retry ==0.9.3.0,
     314 + any.retry ==0.9.3.1,
    313 315   any.rts ==1.0.2,
    314 316   any.safe ==0.3.19,
    315 317   any.safe-exceptions ==0.1.7.3,
    316  - any.sandwich ==0.1.3.0,
     318 + any.sandwich ==0.1.5.1,
    317 319   any.scanner ==0.3.1,
    318 320   any.scientific ==0.3.7.0,
    319  - any.semialign ==1.2.0.1,
     321 + any.semialign ==1.3,
    320 322   any.semigroupoids ==5.3.7,
    321 323   any.semigroups ==0.20,
    322 324   any.semver ==0.4.0.1,
    323  - any.servant ==0.19.1,
    324  - any.servant-client ==0.19,
    325  - any.servant-client-core ==0.19,
    326  - any.servant-openapi3 ==2.0.1.5,
    327  - any.servant-server ==0.19.2,
    328  - any.setenv ==0.1.1.3,
    329  - any.shakespeare ==2.0.30,
    330  - any.simple-sendfile ==0.2.30,
     325 + any.servant ==0.20,
     326 + any.servant-client ==0.20,
     327 + any.servant-client-core ==0.20,
     328 + any.servant-openapi3 ==2.0.1.6,
     329 + any.servant-server ==0.20,
     330 + any.shakespeare ==2.1.0,
     331 + any.simple-sendfile ==0.2.31,
    331 332   any.singleton-bool ==0.1.6,
    332  - any.smallcheck ==1.2.1,
     333 + any.smallcheck ==1.2.1.1,
    333 334   any.socks ==0.6.1,
    334 335   any.some ==1.0.5,
    335 336   any.sop-core ==0.5.0.2,
    skipped 1 lines
    337 338   any.splitmix ==0.1.0.4,
    338 339   any.statistics ==0.16.2.0,
    339 340   any.stm ==2.5.1.0,
    340  - any.stm-chans ==3.0.0.6,
    341  - any.stm-containers ==1.2,
     341 + any.stm-chans ==3.0.0.9,
     342 + any.stm-containers ==1.2.0.2,
    342 343   any.stm-delay ==0.1.1.1,
    343  - any.stm-hamt ==1.2.0.8,
    344  - any.streaming-commons ==0.2.2.5,
    345  - any.strict ==0.4.0.1,
     344 + any.stm-hamt ==1.2.0.11,
     345 + any.streaming-commons ==0.2.2.6,
     346 + any.strict ==0.5,
    346 347   any.string-conversions ==0.4.0.1,
    347  - any.string-interpolate ==0.3.2.0,
     348 + any.string-interpolate ==0.3.2.1,
    348 349   any.superbuffer ==0.3.1.2,
    349  - any.syb ==0.7.2.2,
     350 + any.syb ==0.7.2.3,
    350 351   any.system-cxx-std-lib ==1.0,
    351  - any.system-filepath ==0.4.14,
    352  - any.tagged ==0.8.6.1,
    353  - any.tasty ==1.4.2.3,
    354  - any.tasty-bench ==0.3.2,
    355  - any.template-haskell ==2.19.0.0,
    356  - any.template-haskell-compat-v0208 ==0.1.9.1,
     352 + any.tagged ==0.8.7,
     353 + any.tasty ==1.4.3,
     354 + any.tasty-bench ==0.3.4,
     355 + any.template-haskell ==2.20.0.0,
     356 + any.template-haskell-compat-v0208 ==0.1.9.2,
    357 357   any.temporary ==1.3,
    358  - any.terminal-size ==0.3.3,
    359  - any.terminfo ==0.4.1.5,
     358 + any.terminal-size ==0.3.4,
     359 + any.terminfo ==0.4.1.6,
    360 360   any.test-framework ==0.8.2.0,
    361 361   any.test-framework-hunit ==0.3.0.2,
    362 362   any.testcontainers ==0.5.0.0,
    363  - any.text ==2.0.1,
     363 + any.text ==2.0.2,
    364 364   any.text-builder ==0.6.7,
    365  - any.text-builder-dev ==0.3.3,
     365 + any.text-builder-dev ==0.3.3.2,
    366 366   any.text-conversions ==0.3.1.1,
    367 367   any.text-latin1 ==0.3.1,
    368 368   any.text-printer ==0.5.0.2,
    369 369   any.text-short ==0.1.5,
    370  - any.text-zipper ==0.12,
     370 + any.text-zipper ==0.13,
    371 371   any.tf-random ==0.5,
    372  - any.th-abstraction ==0.4.5.0,
     372 + any.th-abstraction ==0.6.0.0,
    373 373   any.th-compat ==0.1.4,
    374  - any.th-expand-syns ==0.4.10.0,
    375  - any.th-extras ==0.0.0.6,
    376  - any.th-lift ==0.8.2,
     374 + any.th-expand-syns ==0.4.11.0,
     375 + any.th-lift ==0.8.4,
    377 376   any.th-lift-instances ==0.1.20,
    378 377   any.th-orphans ==0.13.14,
    379 378   any.th-reify-many ==0.1.10,
    380  - any.these ==1.1.1.1,
     379 + any.these ==1.2,
    381 380   any.these-skinny ==0.7.5,
    382 381   any.time ==1.12.2,
    383 382   any.time-compat ==1.9.6.1,
    384 383   any.time-locale-compat ==0.1.1.5,
    385 384   any.time-manager ==0.0.0,
    386 385   any.tls ==1.6.0,
    387  - any.transformers ==0.5.6.2,
     386 + any.transformers ==0.6.1.0,
    388 387   any.transformers-base ==0.4.6,
    389 388   any.transformers-compat ==0.7.2,
    390 389   any.type-equality ==1,
    391 390   any.type-hint ==0.1,
    392  - any.typed-process ==0.2.10.1,
     391 + any.typed-process ==0.2.11.0,
    393 392   any.unagi-chan ==0.4.1.4,
    394  - any.unbounded-delays ==0.1.1.1,
    395  - any.unix ==2.7.3,
    396  - any.unix-compat ==0.6,
    397  - any.unix-time ==0.4.8,
    398  - any.unliftio ==0.2.23.0,
    399  - any.unliftio-core ==0.2.0.1,
     393 + any.unix ==2.8.4.0,
     394 + any.unix-compat ==0.7,
     395 + any.unix-time ==0.4.9,
     396 + any.unliftio ==0.2.24.0,
     397 + any.unliftio-core ==0.2.1.0,
    400 398   any.unordered-containers ==0.2.19.1,
    401 399   any.uri-bytestring ==0.3.3.1,
    402 400   any.uri-encode ==1.5.0.7,
    skipped 8 lines
    411 409   any.vector ==0.12.3.1,
    412 410   any.vector-algorithms ==0.9.0.1,
    413 411   any.vector-binary-instances ==0.2.5.2,
    414  - any.vector-instances ==3.4,
     412 + any.vector-instances ==3.4.2,
    415 413   any.vector-th-unbox ==0.2.2,
    416 414   any.void ==0.7.3,
    417  - any.vty ==5.37,
     415 + any.vty ==5.38,
    418 416   any.wai ==3.2.3,
    419 417   any.wai-app-static ==3.1.7.4,
    420 418   any.wai-extra ==3.1.13.0,
    421 419   any.wai-logger ==2.4.0,
    422  - any.warp ==3.3.23,
    423  - any.wcwidth ==0.0.2,
     420 + any.warp ==3.3.25,
    424 421   any.websockets ==0.12.7.3,
    425 422   any.wide-word ==0.1.5.0,
    426  - any.witch ==1.1.2.0,
     423 + any.witch ==1.2.0.2,
    427 424   any.witherable ==0.4.2,
    428 425   any.wl-pprint-annotated ==0.1.0.1,
    429 426   any.word-wrap ==0.5,
    skipped 10 lines
    440 437   any.xml-types ==0.3.8,
    441 438   any.yaml ==0.11.10.0,
    442 439   any.zlib ==0.6.3.0,
    443  -index-state: hackage.haskell.org 2023-04-26T15:43:24Z
     440 +index-state: hackage.haskell.org 2023-09-27T18:59:39Z
    444 441   
  • ■ ■ ■ ■
    cli/README.md
    skipped 18 lines
    19 19  
    20 20   You can also install a specific version of the CLI by providing the `VERSION` variable:
    21 21   ```bash
    22  - curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.36.0 bash
     22 + curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.37.0 bash
    23 23   ```
    24 24   
    25 25  - Windows
    skipped 32 lines
  • ■ ■ ■ ■ ■ ■
    cli/get.sh
    skipped 43 lines
    44 44  # version=${VERSION:-`echo $(curl -s -f -H 'Content-Type: application/json' \
    45 45   # https://releases.hasura.io/graphql-engine?agent=cli-get.sh) | sed -n -e "s/^.*\"$release\":\"\([^\",}]*\)\".*$/\1/p"`}
    46 46   
    47  -version=${VERSION:-v2.36.0}
     47 +version=${VERSION:-v2.37.0}
    48 48   
    49 49  if [ ! $version ]; then
    50 50   log "${YELLOW}"
    skipped 11 lines
    62 62   
    63 63  log "${YELLOW}"
    64 64  log NOTE: Install a specific version of the CLI by using VERSION variable
    65  -log 'curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.36.0 bash'
     65 +log 'curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.37.0 bash'
    66 66  log "${NC}"
    67 67   
    68 68  # check for existing hasura installation
    skipped 92 lines
  • ■ ■ ■ ■ ■
    cli/internal/metadataobject/api_limits/api_limits.go
    skipped 49 lines
    50 50   DepthLimit yaml.Node `yaml:"depth_limit,omitempty"`
    51 51   NodeLimit yaml.Node `yaml:"node_limit,omitempty"`
    52 52   TimeLimit yaml.Node `yaml:"time_limit,omitempty"`
     53 + BatchLimit yaml.Node `yaml:"batch_limit,omitempty"`
    53 54  }
    54 55   
    55 56  func (o *MetadataObject) Build() (map[string]interface{}, error) {
    skipped 69 lines
  • ■ ■ ■ ■ ■
    cli/internal/metadataobject/api_limits/api_limits_test.go
    skipped 100 lines
    101 101   map[string][]byte{
    102 102   "testdata/export_test/api_limits.yaml": []byte(`disabled: false
    103 103  rate_limit:
    104  - per_role: {}
    105 104   global:
     105 + max_reqs_per_min: 30
    106 106   unique_params: IP
    107  - max_reqs_per_min: 1
     107 + per_role:
     108 + user:
     109 + max_reqs_per_min: 10
     110 + unique_params: null
     111 +depth_limit:
     112 + global: 5
     113 + per_role:
     114 + user: 2
     115 +node_limit:
     116 + global: 5
     117 + per_role:
     118 + user: 2
     119 +time_limit:
     120 + global: 30
     121 + per_role:
     122 + user: 10
     123 +batch_limit:
     124 + global: 5
     125 + per_role:
     126 + user: 2
    108 127  `)},
    109 128   false,
    110 129   require.NoError,
    skipped 22 lines
  • ■ ■ ■ ■ ■ ■
    cli/internal/metadataobject/api_limits/testdata/build_test/t1/metadata/api_limits.yaml
     1 +batch_limit:
     2 + global: 5
     3 + per_role:
     4 + user: 2
     5 +depth_limit:
     6 + global: 5
     7 + per_role:
     8 + user: 2
    1 9  disabled: false
     10 +node_limit:
     11 + global: 5
     12 + per_role:
     13 + user: 2
    2 14  rate_limit:
    3  - per_role: {}
    4 15   global:
     16 + max_reqs_per_min: 30
    5 17   unique_params: IP
    6  - max_reqs_per_min: 1
     18 + per_role:
     19 + user:
     20 + max_reqs_per_min: 10
     21 + unique_params: null
     22 +time_limit:
     23 + global: 30
     24 + per_role:
     25 + user: 10
     26 + 
  • ■ ■ ■ ■
    cli/internal/metadataobject/api_limits/testdata/build_test/t1/want.golden.json
    1  -{"api_limits": {"disabled": false, "rate_limit": {"per_role": {}, "global": {"unique_params": "IP", "max_reqs_per_min": 1}}}}
     1 +{"api_limits": {"disabled": false, "rate_limit": {"global": {"max_reqs_per_min": 30, "unique_params": "IP"}, "per_role": {"user": {"max_reqs_per_min": 10, "unique_params": null}}}, "depth_limit": {"global": 5, "per_role": {"user": 2}}, "node_limit": {"global": 5, "per_role": {"user": 2}}, "time_limit": {"global": 30, "per_role": {"user": 10}}, "batch_limit": {"global": 5, "per_role": {"user": 2}}}}
    2 2   
  • ■ ■ ■ ■ ■ ■
    cli/internal/metadataobject/api_limits/testdata/export_test/metadata.json
    skipped 3 lines
    4 4   "api_limits": {
    5 5   "disabled": false,
    6 6   "rate_limit": {
    7  - "per_role": {},
    8  - "global": {
    9  - "unique_params": "IP",
    10  - "max_reqs_per_min": 1
     7 + "global": { "max_reqs_per_min": 30, "unique_params": "IP" },
     8 + "per_role": {
     9 + "user": { "max_reqs_per_min": 10, "unique_params": null }
    11 10   }
    12  - }
     11 + },
     12 + "depth_limit": { "global": 5, "per_role": { "user": 2 } },
     13 + "node_limit": { "global": 5, "per_role": { "user": 2 } },
     14 + "time_limit": { "global": 30, "per_role": { "user": 10 } },
     15 + "batch_limit": { "global": 5, "per_role": { "user": 2 } }
    13 16   }
    14 17  }
     18 + 
  • ■ ■ ■ ■
    dc-agents/dc-api-types/package.json
    1 1  {
    2 2   "name": "@hasura/dc-api-types",
    3  - "version": "0.43.0",
     3 + "version": "0.44.0",
    4 4   "description": "Hasura GraphQL Engine Data Connector Agent API types",
    5 5   "author": "Hasura (https://github.com/hasura/graphql-engine)",
    6 6   "license": "Apache-2.0",
    skipped 31 lines
  • ■ ■ ■ ■ ■ ■
    dc-agents/dc-api-types/src/agent.openapi.json
    1 1  {
    2  - "openapi": "3.0.0",
    3 2   "info": {
    4 3   "title": "",
    5 4   "version": ""
    skipped 3622 lines
    3628 3627   "type": "object"
    3629 3628   }
    3630 3629   }
    3631  - }
     3630 + },
     3631 + "openapi": "3.0.0"
    3632 3632  }
    3633 3633   
  • ■ ■ ■ ■ ■ ■
    dc-agents/package-lock.json
    skipped 23 lines
    24 24   },
    25 25   "dc-api-types": {
    26 26   "name": "@hasura/dc-api-types",
    27  - "version": "0.43.0",
     27 + "version": "0.44.0",
    28 28   "license": "Apache-2.0",
    29 29   "devDependencies": {
    30 30   "@tsconfig/node16": "^1.0.3",
    skipped 2196 lines
    2227 2227   "license": "Apache-2.0",
    2228 2228   "dependencies": {
    2229 2229   "@fastify/cors": "^8.1.0",
    2230  - "@hasura/dc-api-types": "0.43.0",
     2230 + "@hasura/dc-api-types": "0.44.0",
    2231 2231   "fastify": "^4.13.0",
    2232 2232   "mathjs": "^11.0.0",
    2233 2233   "pino-pretty": "^8.0.0",
    skipped 313 lines
    2547 2547   "license": "Apache-2.0",
    2548 2548   "dependencies": {
    2549 2549   "@fastify/cors": "^8.1.0",
    2550  - "@hasura/dc-api-types": "0.43.0",
     2550 + "@hasura/dc-api-types": "0.44.0",
    2551 2551   "fastify": "^4.13.0",
    2552 2552   "fastify-metrics": "^9.2.1",
    2553 2553   "nanoid": "^3.3.4",
    skipped 314 lines
    2868 2868   "version": "file:reference",
    2869 2869   "requires": {
    2870 2870   "@fastify/cors": "^8.1.0",
    2871  - "@hasura/dc-api-types": "0.43.0",
     2871 + "@hasura/dc-api-types": "0.44.0",
    2872 2872   "@tsconfig/node16": "^1.0.3",
    2873 2873   "@types/node": "^16.11.49",
    2874 2874   "@types/xml2js": "^0.4.11",
    skipped 205 lines
    3080 3080   "version": "file:sqlite",
    3081 3081   "requires": {
    3082 3082   "@fastify/cors": "^8.1.0",
    3083  - "@hasura/dc-api-types": "0.43.0",
     3083 + "@hasura/dc-api-types": "0.44.0",
    3084 3084   "@tsconfig/node16": "^1.0.3",
    3085 3085   "@types/node": "^16.11.49",
    3086 3086   "@types/sqlite3": "^3.1.8",
    skipped 1793 lines
  • ■ ■ ■ ■ ■ ■
    dc-agents/reference/package-lock.json
    skipped 9 lines
    10 10   "license": "Apache-2.0",
    11 11   "dependencies": {
    12 12   "@fastify/cors": "^8.1.0",
    13  - "@hasura/dc-api-types": "0.43.0",
     13 + "@hasura/dc-api-types": "0.44.0",
    14 14   "fastify": "^4.13.0",
    15 15   "mathjs": "^11.0.0",
    16 16   "pino-pretty": "^8.0.0",
    skipped 35 lines
    52 52   "integrity": "sha512-lgHwxlxV1qIg1Eap7LgIeoBWIMFibOjbrYPIPJZcI1mmGAI2m3lNYpK12Y+GBdPQ0U1hRwSord7GIaawz962qQ=="
    53 53   },
    54 54   "node_modules/@hasura/dc-api-types": {
    55  - "version": "0.43.0",
     55 + "version": "0.44.0",
    56 56   "license": "Apache-2.0",
    57 57   "devDependencies": {
    58 58   "@tsconfig/node16": "^1.0.3",
    skipped 1133 lines
  • ■ ■ ■ ■
    dc-agents/reference/package.json
    skipped 21 lines
    22 22   },
    23 23   "dependencies": {
    24 24   "@fastify/cors": "^8.1.0",
    25  - "@hasura/dc-api-types": "0.43.0",
     25 + "@hasura/dc-api-types": "0.44.0",
    26 26   "fastify": "^4.13.0",
    27 27   "mathjs": "^11.0.0",
    28 28   "pino-pretty": "^8.0.0",
    skipped 11 lines
  • ■ ■ ■ ■ ■ ■
    dc-agents/sqlite/package-lock.json
    skipped 9 lines
    10 10   "license": "Apache-2.0",
    11 11   "dependencies": {
    12 12   "@fastify/cors": "^8.1.0",
    13  - "@hasura/dc-api-types": "0.43.0",
     13 + "@hasura/dc-api-types": "0.44.0",
    14 14   "fastify": "^4.13.0",
    15 15   "fastify-metrics": "^9.2.1",
    16 16   "nanoid": "^3.3.4",
    skipped 40 lines
    57 57   "integrity": "sha512-lgHwxlxV1qIg1Eap7LgIeoBWIMFibOjbrYPIPJZcI1mmGAI2m3lNYpK12Y+GBdPQ0U1hRwSord7GIaawz962qQ=="
    58 58   },
    59 59   "node_modules/@hasura/dc-api-types": {
    60  - "version": "0.43.0",
     60 + "version": "0.44.0",
    61 61   "license": "Apache-2.0",
    62 62   "devDependencies": {
    63 63   "@tsconfig/node16": "^1.0.3",
    skipped 2049 lines
  • ■ ■ ■ ■
    dc-agents/sqlite/package.json
    skipped 21 lines
    22 22   },
    23 23   "dependencies": {
    24 24   "@fastify/cors": "^8.1.0",
    25  - "@hasura/dc-api-types": "0.43.0",
     25 + "@hasura/dc-api-types": "0.44.0",
    26 26   "fastify-metrics": "^9.2.1",
    27 27   "fastify": "^4.13.0",
    28 28   "nanoid": "^3.3.4",
    skipped 16 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/actions/action-handlers.mdx
    skipped 249 lines
    250 250   
    251 251  :::info Note
    252 252   
     253 +In case there are multiple headers with the same name, the order of precedence is: client headers \> resolved user
     254 +(`x-hasura-*`) variables \> configuration headers.
     255 + 
     256 +If you want to change the order of precedence to: configuration headers \> resolved user (`x-hasura-*`) variables
     257 +\> client headers, use the [configured header
     258 +precedence](deployment/graphql-engine-flags/reference.mdx/#configured-header-precedence) flag or environment variable.
     259 + 
     260 +:::
     261 + 
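For example, if the engine is configured through environment variables, the precedence can be flipped before startup (a minimal sketch; the exact variable is listed in the flag reference linked above):

```bash
# a sketch: give configuration (metadata) headers precedence over client headers
# for Action handlers and input validations
export HASURA_GRAPHQL_CONFIGURED_HEADER_PRECEDENCE=true
```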
     262 +:::info Note
     263 + 
    253 264  Before creating an action via the
    254 265  [create_action Metadata API](/api-reference/metadata-api/actions.mdx#metadata-create-action), all custom types need to
    255 266  be defined via the [set_custom_types](/api-reference/metadata-api/custom-types.mdx#metadata-set-custom-types) Metadata
    skipped 54 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/api-reference/syntax-defs.mdx
    skipped 181 lines
    182 182  | --------------------- | -------- | --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
    183 183  | max_connections | false | `Integer` | Maximum number of connections to be kept in the pool (default: 50) |
    184 184  | total_max_connections | false | `Integer` | Maximum number of total connections to be maintained across any number of Hasura Cloud instances (default: 1000). Takes precedence over `max_connections` in Cloud projects. _(Only available in Hasura Cloud)_ |
    185  -| idle_timeout | false | `Integer` | The idle timeout (in seconds) per connection (default: 180) |
     185 +| idle_timeout | false | `Integer` | The idle timeout (in seconds) per connection (default: 180 for self-hosted & 30 for Cloud) |
    186 186  | retries | false | `Integer` | Number of retries to perform when failing to acquire connection (default: 1). Note that this configuration does not affect user/statement errors on PG. |
    187 187  | pool_timeout | false | `Integer` | Maximum time to wait while acquiring a Postgres connection from the pool, in seconds (default: forever) |
    188 188  | connection_lifetime | false | `Integer` | Time from connection creation after which the connection should be destroyed and a new one created. A value of 0 indicates we should never destroy an active connection. If 0 is passed, memory from large query results may not be reclaimed. (default: 600 sec) |
    skipped 18 lines
    207 207  | --------------------- | -------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
    208 208  | max_connections | false | `Integer` | Maximum number of connections to be kept in the pool (default: 50) |
    209 209  | total_max_connections | false | `Integer` | Maximum number of total connections across any number of Hasura Cloud instances (default: 50). Takes precedence over `max_connections` in Cloud projects. _(Only available in Cloud)_ |
    210  -| idle_timeout | false | `Integer` | The idle timeout (in seconds) per connection (default: 180) |
     210 +| idle_timeout | false | `Integer` | The idle timeout (in seconds) per connection (default: 5) |
    211 211   
    212 212  This schema indicates that the source does not use a connection pool:
    213 213   
    skipped 1609 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/athena/getting-started/docker.mdx
    skipped 35 lines
    36 36  the Amazon Athena GraphQL Connector agent. By navigating to the Hasura Console after execution, you'll find the Amazon
    37 37  Athena data source as a type that can now be added to your Hasura GraphQL Service instance.
    38 38   
     39 +You can check the health of the connector using the `/health` endpoint. The connector is available at the following
     40 +endpoint:
     41 + 
     42 +```bash
     43 +http://localhost:8081/api/v1/athena/health
     44 +```
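For instance, a quick way to verify the agent is reachable (a sketch, assuming the default local port mapping shown above):

```bash
# a healthy connector agent should respond with a successful (2xx) status
curl -i http://localhost:8081/api/v1/athena/health
```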
     45 + 
     46 +:::info Data connector
     47 + 
     48 +This data source utilizes the `hasura/graphql-data-connector` to connect your data source to the GraphQL engine.
     49 + 
     50 +:::
     51 + 
    39 52  ## Keep up to date
    40 53   
    41 54  :::info Note
    skipped 14 lines
  • ■ ■ ■ ■ ■
    docs/docs/databases/clickhouse/getting-started/docker.mdx
    skipped 32 lines
    33 33  the ClickHouse GraphQL Connector agent. By navigating to the Hasura Console after execution, you'll find the ClickHouse
    34 34  data source as a type that can now be added to your Hasura GraphQL Service instance.
    35 35   
    36  -You can follow the instructions from [this step](/databases/clickhouse/getting-started/cloud.mdx#step-22-next-choose-the-clickhouse-driver) onward to connect to your ClickHouse instance and begin using it with
    37  -Hasura.
     36 +You can check the health of the connector using the `/health` endpoint. The connector is available at the following
     37 +endpoint:
     38 + 
     39 +```bash
     40 +http://localhost:8080/health
     41 +```
     42 + 
     43 +You can follow the instructions from
     44 +[this step](/databases/clickhouse/getting-started/cloud.mdx#step-22-next-choose-the-clickhouse-driver) onward to connect
     45 +to your ClickHouse instance and begin using it with Hasura.
    38 46   
    39 47  ## Keep up to date
    40 48   
    skipped 15 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/data-connectors/adding-data-connectors.mdx
    skipped 57 lines
    58 58  </li>
    59 59  <li>
    60 60   
    61  -Enter the values for name and Agent endpoint. Click `Connect` and you're done!
     61 +Enter the values for agent name and agent URL. You can also choose to put the agent URL in an environment variable instead and use that variable name here. Click `Connect` and you're done!
    62 62   
    63 63  <Thumbnail
    64 64   src="/img/databases/data-connector/connect-final.png"
    skipped 23 lines
    88 88   uri: <data-connector-agent-url>
    89 89  ```
    90 90   
     91 +Alternatively, you can provide the Data Connector Agent URL via an environment variable:
     92 + 
     93 +```yaml
     94 +dataconnector:
     95 + sqlite:
     96 + uri:
     97 + from_env: <data-connector-agent-url-environment-variable-name>
     98 +```
     99 + 
    91 100  Apply the Metadata by running:
    92 101   
    93 102  ```yaml
    skipped 16 lines
    110 119   "args": {
    111 120   "name": "sqlite",
    112 121   "url": "<url-where-data-connector-agent-is-deployed>"
     122 + }
     123 +}
     124 +```
     125 + 
     126 +Alternatively, you can provide the Data Connector Agent URL via an environment variable:
     127 + 
     128 +```http
     129 +POST /v1/metadata HTTP/1.1
     130 +Content-Type: application/json
     131 +X-Hasura-Role: admin
     132 + 
     133 +{
     134 + "type": "dc_add_agent",
     135 + "args": {
     136 + "name": "sqlite",
     137 + "url": {
     138 + "from_env": "<data-connector-agent-url-environment-variable-name>"
     139 + }
    113 140   }
    114 141  }
    115 142  ```
    skipped 4 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/mariadb/docker.mdx
    skipped 119 lines
    120 120  7b5b2ee70ece hasura/graphql-data-connector ... 1m ago Up 1m 5005/tcp ..
    121 121  ```
    122 122   
     123 +You can check the health of the connector using the `/health` endpoint. The connector is available at the following
     124 +endpoint:
     125 + 
     126 +```bash
     127 +http://localhost:8081/api/v1/mariadb/health
     128 +```
     129 + 
     130 +:::info Data connector
     131 + 
     132 +This data source utilizes the `hasura/graphql-data-connector` to connect your data source to the GraphQL engine.
     133 + 
     134 +:::
     135 + 
    123 136  ### Step 4: Load the Hasura Console
    124 137   
    125 138  Open the Hasura Console by navigating to `http://localhost:8080/console`. You will need to input your admin secret key
    skipped 83 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/mongodb/index.mdx
    skipped 28 lines
    29 29   
    30 30  Hasura currently supports queries, table relationships, remote relationships and permissions on MongoDB databases.
    31 31   
    32  -A [logical model](/schema/mongodb/logical-models.mdx) or database [validation schema](https://www.mongodb.com/docs/upcoming/core/schema-validation/specify-json-schema/#std-label-schema-validation-json) is required for generating your GraphQL schema.
     32 +A [logical model](/schema/mongodb/logical-models.mdx) or database
     33 +[validation schema](https://www.mongodb.com/docs/upcoming/core/schema-validation/specify-json-schema/#std-label-schema-validation-json)
     34 +is required for generating your GraphQL schema.
    33 35   
    34 36  :::
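As a rough illustration of the second option, a collection can be created with a JSON validation schema, which Hasura can then use when generating your GraphQL schema (a sketch only; the collection and field names are made up, and `mongosh` is assumed to be available):

```bash
# assumption: a local MongoDB instance and a hypothetical "users" collection
mongosh "mongodb://localhost:27017/mydb" --eval '
  db.createCollection("users", {
    validator: {
      $jsonSchema: {
        bsonType: "object",
        required: ["name"],
        properties: {
          name: { bsonType: "string" },
          age: { bsonType: "int" }
        }
      }
    }
  })
'
```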
    35 37   
    skipped 7 lines
    43 45  - In Hasura Cloud, check out our [Getting Started with MongoDB in Hasura Cloud](/databases/mongodb/cloud.mdx) guide
    44 46  - In a Docker environment, check out our [Getting Started with Docker](/databases/mongodb/docker.mdx) guide
    45 47   
     48 +## Feature Support
     49 + 
     50 +<div className="feature-matrix-tables">
     51 + 
     52 +<table>
     53 + <tr>
     54 + <td>Feature</td>
     55 + <td>MongoDB</td>
     56 + </tr>
     57 + <tr>
     58 + <td>Remote Relationships</td>
     59 + <td>✅</td>
     60 + </tr>
     61 + <tr>
     62 + <td>Views</td>
     63 + <td>✅</td>
     64 + </tr>
     65 + <tr>
     66 + <td>Custom Functions</td>
     67 + <td>❌</td>
     68 + </tr>
     69 + <tr>
     70 + <td>Logical Models</td>
     71 + <td>✅</td>
     72 + </tr>
     73 + <tr>
     74 + <td>Native Queries</td>
     75 + <td>❌</td>
     76 + </tr>
     77 +</table>
     78 + 
     79 +### Queries
     80 + 
     81 +<table>
     82 + <tr>
     83 + <td>Feature</td>
     84 + <td>MongoDB</td>
     85 + </tr>
     86 + <tr>
     87 + <td>Simple</td>
     88 + <td>✅</td>
     89 + </tr>
     90 + <tr>
     91 + <td>Nested Object</td>
     92 + <td>✅</td>
     93 + </tr>
     94 + <tr>
     95 + <td>Aggregation</td>
     96 + <td>✅</td>
     97 + </tr>
     98 + <tr>
     99 + <td>Filter / Search</td>
     100 + <td>✅</td>
     101 + </tr>
     102 + <tr>
     103 + <td>Sort</td>
     104 + <td>✅</td>
     105 + </tr>
     106 + <tr>
     107 + <td>Distinct</td>
     108 + <td>✅</td>
     109 + </tr>
     110 + <tr>
     111 + <td>Paginate</td>
     112 + <td>✅</td>
     113 + </tr>
     114 + <tr>
     115 + <td>Multiple Arguments</td>
     116 + <td>✅</td>
     117 + </tr>
     118 + <tr>
     119 + <td>Multiple Queries</td>
     120 + <td>✅</td>
     121 + </tr>
     122 + <tr>
     123 + <td>Variables / Aliases / Fragments</td>
     124 + <td>✅</td>
     125 + </tr>
     126 +</table>
     127 + 
     128 +### Mutations
     129 + 
     130 +<table>
     131 + <tr>
     132 + <td>Feature</td>
     133 + <td>MongoDB</td>
     134 + </tr>
     135 + <tr>
     136 + <td>Insert</td>
     137 + <td>❌</td>
     138 + </tr>
     139 + <tr>
     140 + <td>Upsert</td>
     141 + <td>❌</td>
     142 + </tr>
     143 + <tr>
     144 + <td>Update</td>
     145 + <td>❌</td>
     146 + </tr>
     147 + <tr>
     148 + <td>Delete</td>
     149 + <td>❌</td>
     150 + </tr>
     151 + <tr>
     152 + <td>Multiple per Request</td>
     153 + <td>❌</td>
     154 + </tr>
     155 +</table>
     156 + 
     157 +### Subscriptions
     158 + 
     159 +<table>
     160 + <tr>
     161 + <td>Feature</td>
     162 + <td>MongoDB</td>
     163 + </tr>
     164 + <tr>
     165 + <td>Value of Field</td>
     166 + <td>❌</td>
     167 + </tr>
     168 + <tr>
     169 + <td>Updates to Rows</td>
     170 + <td>❌</td>
     171 + </tr>
     172 + <tr>
     173 + <td>Value of Derived Field</td>
     174 + <td>❌</td>
     175 + </tr>
     176 + <tr>
     177 + <td>Streaming Subscriptions</td>
     178 + <td>❌</td>
     179 + </tr>
     180 +</table>
     181 + 
     182 +### Event Triggers
     183 + 
     184 +<table>
     185 + <tr>
     186 + <td>Feature</td>
     187 + <td>MongoDB</td>
     188 + </tr>
     189 + <tr>
     190 + <td>INSERT</td>
     191 + <td>❌</td>
     192 + </tr>
     193 + <tr>
     194 + <td>UPDATE</td>
     195 + <td>❌</td>
     196 + </tr>
     197 + <tr>
     198 + <td>DELETE</td>
     199 + <td>❌</td>
     200 + </tr>
     201 + <tr>
     202 + <td>MANUAL</td>
     203 + <td>❌</td>
     204 + </tr>
     205 +</table>
     206 + 
     207 +</div>
     208 + 
    46 209  ## Managing data with the Hasura Console
    47 210   
    48 211  The Hasura Console is a web UI that allows you to manage your data and metadata. It is available at
    skipped 5 lines
    54 217   
    55 218  :::info Console support
    56 219   
    57  -We recommend using your preferred MongoDB client instead. The Hasura Console is designed to be a tool for managing
    58  -your GraphQL API, and not a full-fledged database management tool.
     220 +We recommend using your preferred MongoDB client instead. The Hasura Console is designed to be a tool for managing your
     221 +GraphQL API, and not a full-fledged database management tool.
    59 222   
    60 223  :::
    61 224   
    skipped 7 lines
    69 232  ## Know more
    70 233   
    71 234  - [Get started](/databases/mongodb/docker.mdx)
     235 + 
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/mysql/docker.mdx
    skipped 125 lines
    126 126  7b5b2ee70ece hasura/graphql-data-connector ... 1m ago Up 1m 5005/tcp ..
    127 127  ```
    128 128   
     129 +You can check the health of the connector using the `/health` endpoint. The connector is available at the following
     130 +endpoint:
     131 + 
     132 +```bash
     133 +http://localhost:8081/api/v1/mysql/health
     134 +```
     135 + 
     136 +:::info Data connector
     137 + 
     138 +This data source utilizes the `hasura/graphql-data-connector` to connect your data source to the GraphQL engine.
     139 + 
     140 +:::
     141 + 
    129 142  ### Step 4: Load the Hasura Console
    130 143   
    131 144  Open the Hasura Console by navigating to `http://localhost:8080/console`. You will need to input your admin secret key
    skipped 83 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/oracle/docker.mdx
    skipped 120 lines
    121 121  7b5b2ee70ece hasura/graphql-data-connector ... 1m ago Up 1m 5005/tcp ..
    122 122  ```
    123 123   
     124 +You can check the health of the connector using the `/health` endpoint. The connector is available at the following
     125 +endpoint:
     126 + 
     127 +```bash
     128 +http://localhost:8081/api/v1/oracle/health
     129 +```
     130 + 
     131 +:::info Data connector
     132 + 
     133 +This data source utilizes the `hasura/graphql-data-connector` to connect your data source to the GraphQL engine.
     134 + 
     135 +:::
     136 + 
    124 137  ### Step 4: Load the Hasura Console
    125 138   
    126 139  Open the Hasura Console by navigating to `http://localhost:8080/console`. You will need to input your admin secret key
    skipped 81 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/redshift/getting-started/docker.mdx
    skipped 33 lines
    34 34  the Amazon Redshift GraphQL Connector agent. By navigating to the Hasura Console after execution, you'll find the Amazon
    35 35  Redshift data source as a type that can now be added to your Hasura GraphQL Service instance.
    36 36   
     37 +You can check the health of the connector using the `/health` endpoint. The connector is available at the following
     38 +endpoint:
     39 + 
     40 +```bash
     41 +http://localhost:8081/api/v1/redshift/health
     42 +```
     43 + 
     44 +:::info Data connector
     45 + 
     46 +This data source utilizes the `hasura/graphql-data-connector` to connect your data source to the GraphQL engine.
     47 + 
     48 +:::
     49 + 
    37 50  ## Keep up to date
    38 51   
    39 52  :::info Note
    skipped 14 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/redshift/index.mdx
    skipped 29 lines
    30 30   
    31 31  1. [Hasura Cloud](/databases/redshift/getting-started/cloud.mdx): You'll need to be able to access your Amazon Redshift
    32 32   instance service from Hasura Cloud.
    33  -2. [Docker](/databases/redshift/getting-started/docker.mdx): Run Hasura with Docker and then connect your Amazon Redshift
    34  - instance to Hasura.
     33 +2. [Docker](/databases/redshift/getting-started/docker.mdx): Run Hasura with Docker and then connect your Amazon
     34 + Redshift instance to Hasura.
     35 + 
     36 +## Feature Support
     37 + 
     38 +<div className="feature-matrix-tables">
     39 + 
     40 +<table>
     41 + <tr>
     42 + <td>Feature</td>
     43 + <td>Redshift</td>
     44 + </tr>
     45 + <tr>
     46 + <td>Remote Relationships</td>
     47 + <td>✅</td>
     48 + </tr>
     49 + <tr>
     50 + <td>Views</td>
     51 + <td>✅</td>
     52 + </tr>
     53 +</table>
     54 + 
     55 +### Queries
     56 + 
     57 +<table>
     58 + <tr>
     59 + <td>Feature</td>
     60 + <td>Redshift</td>
     61 + </tr>
     62 + <tr>
     63 + <td>Simple</td>
     64 + <td>✅</td>
     65 + </tr>
     66 + <tr>
     67 + <td>Nested Object</td>
     68 + <td>✅</td>
     69 + </tr>
     70 + <tr>
     71 + <td>Aggregation</td>
     72 + <td>✅</td>
     73 + </tr>
     74 + <tr>
     75 + <td>Filter / Search</td>
     76 + <td>✅</td>
     77 + </tr>
     78 + <tr>
     79 + <td>Sort</td>
     80 + <td>✅</td>
     81 + </tr>
     82 + <tr>
     83 + <td>Distinct</td>
     84 + <td>✅ (supported for aggregations only)</td>
     85 + </tr>
     86 + <tr>
     87 + <td>Paginate</td>
     88 + <td>✅</td>
     89 + </tr>
     90 + <tr>
     91 + <td>Multiple Arguments</td>
     92 + <td>✅</td>
     93 + </tr>
     94 + <tr>
     95 + <td>Multiple Queries</td>
     96 + <td>✅</td>
     97 + </tr>
     98 + <tr>
     99 + <td>Variables / Aliases / Fragments</td>
     100 + <td>✅</td>
     101 + </tr>
     102 +</table>
     103 + 
     104 +### Mutations
     105 + 
     106 +<table>
     107 + <tr>
     108 + <td>Feature</td>
     109 + <td>Redshift</td>
     110 + </tr>
     111 + <tr>
     112 + <td>Insert</td>
     113 + <td>❌</td>
     114 + </tr>
     115 + <tr>
     116 + <td>Upsert</td>
     117 + <td>❌</td>
     118 + </tr>
     119 + <tr>
     120 + <td>Update</td>
     121 + <td>❌</td>
     122 + </tr>
     123 + <tr>
     124 + <td>Delete</td>
     125 + <td>❌</td>
     126 + </tr>
     127 + <tr>
     128 + <td>Multiple per Request</td>
     129 + <td>❌</td>
     130 + </tr>
     131 +</table>
    35 132   
    36  -## Supported features
     133 +### Subscriptions
    37 134   
    38  -:::info Note
     135 +<table>
     136 + <tr>
     137 + <td>Feature</td>
     138 + <td>Redshift</td>
     139 + </tr>
     140 + <tr>
     141 + <td>Value of Field</td>
     142 + <td>❌</td>
     143 + </tr>
     144 + <tr>
     145 + <td>Updates to Rows</td>
     146 + <td>❌</td>
     147 + </tr>
     148 + <tr>
     149 + <td>Value of Derived Field</td>
     150 + <td>❌</td>
     151 + </tr>
     152 + <tr>
     153 + <td>Streaming Subscriptions</td>
     154 + <td>❌</td>
     155 + </tr>
     156 +</table>
    39 157   
    40  -Currently, Hasura supports read-only queries, relationships, and permissions on Amazon Redshift.
     158 +### Event Triggers
     159 + 
     160 +<table>
     161 + <tr>
     162 + <td>Feature</td>
     163 + <td>Redshift</td>
     164 + </tr>
     165 + <tr>
     166 + <td>INSERT</td>
     167 + <td>❌</td>
     168 + </tr>
     169 + <tr>
     170 + <td>UPDATE</td>
     171 + <td>❌</td>
     172 + </tr>
     173 + <tr>
     174 + <td>DELETE</td>
     175 + <td>❌</td>
     176 + </tr>
     177 + <tr>
     178 + <td>MANUAL</td>
     179 + <td>❌</td>
     180 + </tr>
     181 +</table>
    41 182   
    42  -:::
     183 +</div>
    43 184   
    44 185  ## Managing data with the Hasura Console
    45 186   
    skipped 6 lines
    52 193   
    53 194  :::info Console support
    54 195   
    55  -We recommend using your preferred Amazon Redshift client instead. The Hasura Console is designed to be a tool for managing
    56  -your GraphQL API, and not a full-fledged database management tool.
     196 +We recommend using your preferred Amazon Redshift client instead. The Hasura Console is designed to be a tool for
     197 +managing your GraphQL API, and not a full-fledged database management tool.
    57 198   
    58 199  :::
    59 200   
    skipped 12 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/databases/snowflake/getting-started/docker.mdx
    skipped 33 lines
    34 34  the Snowflake GraphQL Connector agent. By navigating to the Hasura Console after execution, you'll find the Snowflake
    35 35  data source as a type that can now be added to your Hasura GraphQL Service instance.
    36 36   
     37 +You can check the health of the connector using the `/health` endpoint. The connector is available at the following
     38 +endpoint:
     39 + 
     40 +```bash
     41 +http://localhost:8081/api/v1/snowflake/health
     42 +```
     43 + 
     44 +:::info Data connector
     45 + 
     46 +This data source utilizes the `hasura/graphql-data-connector` to connect your data source to the GraphQL engine.
     47 + 
     48 +:::
     49 + 
    37 50  ### Snowflake Connector Configuration
    38 51   
    39 52  You can directly add your JDBC connection string to the Snowflake Connector agent in the Hasura Console, or you can add
    skipped 48 lines
  • ■ ■ ■ ■ ■
    docs/docs/databases/snowflake/index.mdx
    skipped 50 lines
    51 51   <td>✅</td>
    52 52   </tr>
    53 53   <tr>
    54  - <td>Default Values</td>
    55  - <td>✅</td>
    56  - </tr>
    57  - <tr>
    58 54   <td>Custom Functions</td>
    59 55   <td>✅</td>
    60 56   </tr>
    skipped 36 lines
    97 93   </tr>
    98 94   <tr>
    99 95   <td>Distinct</td>
    100  - <td>✅</td>
     96 + <td>✅ (supported for aggregations only)</td>
    101 97   </tr>
    102 98   <tr>
    103 99   <td>Paginate</td>
    skipped 131 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/deployment/enable-https.mdx
    1  ----
    2  -description: Secure the Hasura GraphQL endpoint
    3  -keywords:
    4  - - hasura
    5  - - docs
    6  - - deployment
    7  - - https
    8  -sidebar_position: 80
    9  ----
    10  - 
    11  -# Enable HTTPS
    12  - 
    13  -## Setting up HTTPS
    14  - 
    15  -Hasura GraphQL Engine does not handle SSL/TLS for your API. That means,
    16  -Hasura GraphQL Engine cannot serve your API on an HTTPS URL.
    17  - 
    18  -You should use a reverse proxy (like Nginx, Caddy, Kong, Traefik etc.)
    19  -or the cloud provider's native load balancer SSL termination features to
    20  -secure your API.
    21  - 
    22  -## Sample configurations
    23  - 
    24  -Here are a few sample configurations for some popular proxies:
    25  - 
    26  -### [Nginx](https://nginx.org/en/docs/)
    27  - 
    28  -Here is a sample `nginx.conf` to proxy requests to Hasura:
    29  - 
    30  -```nginx
    31  -server {
    32  - listen 80;
    33  - listen 443 ssl;
    34  - server_name hasura.<my-domain.com>;
    35  - 
    36  - location / {
    37  - proxy_pass http://localhost:8080/;
    38  - proxy_http_version 1.1;
    39  - proxy_set_header Upgrade $http_upgrade;
    40  - proxy_set_header Connection "upgrade";
    41  - }
    42  -}
    43  -```
    44  - 
    45  -Please note that setting up SSL is not covered in this guide. You can
    46  -find more information at [Nginx docs](https://nginx.org/en/docs/http/configuring_https_servers.html).
    47  - 
    48  -To serve Hasura with a URL prefix instead of a separate subdomain, use `location /hasura/` or similar.
    49  - 
    50  -### [Caddy](https://caddyserver.com/)
    51  - 
    52  -Here is a sample `Caddyfile` to proxy requests to Hasura:
    53  - 
    54  -```bash
    55  -hasura.<my-domain.com> {
    56  - reverse_proxy localhost:8080
    57  -}
    58  -```
    59  - 
    60  -Caddy has TLS provisioning built-in with Let's Encrypt or ZeroSSL. You can find the docs at [Caddy website](https://caddyserver.com/docs/automatic-https).
    61  - 
    62  -In order to serve at a URL prefix, use the following configuration:
    63  - 
    64  -```bash
    65  -<my-domain.com> {
    66  - handle_path /hasura* {
    67  - reverse_proxy localhost:8080
    68  - }
    69  - 
    70  - handle {
    71  - # Fallback for otherwise unhandled requests
    72  - }
    73  -}
    74  -```
    75  - 
    76  -### [Traefik](https://doc.traefik.io/traefik/)
    77  - 
    78  -Here are sample `traefik.toml` and `traefik-dynamic.toml` files to proxy requests to Hasura:
    79  - 
    80  -```toml
    81  -#traefik.toml
    82  - 
    83  -[providers]
    84  - [providers.file]
    85  - filename = "traefik-dynamic.toml"
    86  - 
    87  -[api]
    88  - dashboard = true
    89  - debug = true
    90  - 
    91  -[entryPoints]
    92  - [entryPoints.web]
    93  - address = ":80"
    94  - 
    95  - [entryPoints.web.http]
    96  - [entryPoints.web.http.redirections]
    97  - [entryPoints.web.http.redirections.entryPoint]
    98  - to = "web-secure"
    99  - scheme = "https"
    100  - 
    101  - [entryPoints.web-secure]
    102  - address = ":443"
    103  - 
    104  -[certificatesResolvers.sample.acme]
    105  - email = "[email protected]"
    106  - storage = "acme.json"
    107  - 
    108  - [certificatesResolvers.sample.acme.httpChallenge]
    109  - # used during the challenge
    110  - entryPoint = "web"
    111  -```
    112  - 
    113  -```toml
    114  -#traefik-dynamic.toml
    115  - 
    116  -[http]
    117  - [http.routers]
    118  - [http.routers.my-router]
    119  - rule = "Host(`hasura.example.com`)"
    120  - service = "hasura"
    121  - entryPoints = ["web-secure"]
    122  - [http.routers.my-router.tls]
    123  - certResolver = "sample"
    124  - 
    125  - [http.services]
    126  - [http.services.hasura.loadbalancer]
    127  - [[http.services.hasura.loadbalancer.servers]]
    128  - url = "http://127.0.0.1:5000"
    129  -```
    130  - 
    131  -In order to serve at a URL prefix, use the following configuration:
    132  - 
    133  -```toml
    134  -#traefik-dynamic.toml
    135  -...
    136  - 
    137  - [http.routers]
    138  - [http.routers.my-router]
    139  - rule = "Host(`example.com`) && Path(`/hasura`))"
    140  - service = "hasura"
    141  - entryPoints = ["web-secure"]
    142  - [http.routers.my-router.tls]
    143  - certResolver = "sample"
    144  - 
    145  -...
    146  -```
    147  - 
    148  -Please note that setting up SSL is not covered in this guide. You can
    149  -find more information at the [Traefik docs](https://doc.traefik.io/traefik/https/overview).
    150  - 
  • ■ ■ ■ ■ ■ ■
    docs/docs/deployment/graphql-engine-flags/reference.mdx
    skipped 234 lines
    235 235  | **Default** | `true` |
    236 236  | **Supported in** | CE, Enterprise Edition, Cloud |
    237 237   
     238 +### Configured Header Precedence
     239 + 
      240 +Whether headers configured in the metadata are given higher precedence than client headers for
      241 +[Actions](actions/action-handlers.mdx/#add-a-header-to-your-action) and
      242 +[Postgres input validations](schema/postgres/input-validations.mdx/#request).
     243 + 
     244 +| | |
     245 +| ------------------- | --------------------------------------------- |
     246 +| **Flag** | `--configured-header-precedence` |
     247 +| **Env var** | `HASURA_GRAPHQL_CONFIGURED_HEADER_PRECEDENCE` |
     248 +| **Accepted values** | Boolean |
     249 +| **Options** | `true` or `false` |
     250 +| **Default** | `false` |
     251 +| **Supported in** | CE, Enterprise Edition, Cloud |
     252 + 
    238 253  ### Connections per Read-Replica
    239 254   
    240 255  The maximum number of Postgres connections per [read-replica](databases/database-config/read-replicas.mdx) that can be
    skipped 310 lines
    551 566  | **Env var** | `HASURA_GRAPHQL_EVENTS_FETCH_INTERVAL` |
    552 567  | **Accepted values** | Integer |
    553 568  | **Default** | `null` |
    554  -| **Supported in** | CE, Enterprise Edition, Cloud |
     569 +| **Supported in** | CE, Enterprise Edition |
    555 570   
    556 571  ### Experimental Features
    557 572   
    skipped 419 lines
    977 992  | **Default** | `null` |
    978 993  | **Example** | `redis://username:password@host:port/db` |
    979 994  | **Supported in** | Enterprise Edition only |
     995 + 
     996 +### Remote Schema prioritize data
     997 + 
      998 +When set to `true`, the `data` field is given precedence over the `errors` field if both are present in the Remote Schema response.
     999 + 
     1000 +| | |
     1001 +| ------------------- | ---------------------------------------------- |
     1002 +| **Flag** | `--remote-schema-prioritize-data` |
     1003 +| **Env var** | `HASURA_GRAPHQL_REMOTE_SCHEMA_PRIORITIZE_DATA` |
     1004 +| **Accepted values** | Boolean |
     1005 +| **Options** | `true` or `false` |
     1006 +| **Default** | `false` |
     1007 +| **Supported in** | CE, Enterprise Edition, Cloud |
    980 1008   
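For example, assuming the engine is configured through environment variables, the behavior can be enabled like so (a minimal sketch):

```bash
# a sketch: prefer the `data` field over `errors` when a Remote Schema returns both
export HASURA_GRAPHQL_REMOTE_SCHEMA_PRIORITIZE_DATA=true
```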
    981 1009  ### Schema Sync Poll Interval
    982 1010   
    skipped 259 lines
  • ■ ■ ■ ■
    docs/docs/deployment/production-checklist.mdx
    skipped 167 lines
    168 168   
    169 169  Production APIs should be served over HTTPS to be secure over the network.
    170 170   
    171  -See [Enable HTTPS](/deployment/enable-https.mdx) for details on achieving this.
     171 +See [Running Behind a Proxy](/deployment/serve-behind-proxy.mdx) for details on achieving this.
    172 172   
    173 173  ## Configure a load balancer
    174 174   
    skipped 34 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/deployment/serve-behind-proxy.mdx
     1 +---
     2 +description:
     3 + 'Learn how to configure Hasura to run behind a proxy for enhanced security and efficient API management. This guide
     4 + covers setup for Nginx, Caddy, Kong, and Traefik proxies.'
     5 +keywords:
     6 + - hasura configuration
     7 + - proxy setup
     8 + - graphql engine
     9 + - ssl termination
     10 + - nginx hasura
     11 + - caddy proxy
     12 + - kong reverse proxy
     13 + - traefik configuration
     14 +sidebar_position: 80
     15 +seoFrontMatterUpdated: true
     16 +---
     17 + 
     18 +# Running Behind a Proxy
     19 + 
     20 +## Introduction
     21 + 
     22 +In environments where direct internet access is restricted or for enhanced security measures, running Hasura GraphQL
     23 +Engine behind a proxy is often a necessity. This approach is essential for enterprises that control and monitor internet
     24 +traffic through a proxy server. By configuring Hasura to run behind a proxy, you can manage and secure access to your
     25 +API efficiently.
     26 + 
     27 +While Hasura GraphQL Engine itself does not handle proxy settings, it can be configured to work seamlessly behind
      28 +various popular proxy servers. This guide provides several examples of setting up a reverse proxy, such as Nginx,
      29 +Caddy, Kong, or Traefik, to handle requests to and from your Hasura GraphQL Engine. You can find more solution-specific
      30 +details in each proxy server's documentation.
     31 + 
     32 +By using a reverse proxy, you can enforce security policies, perform SSL termination, and manage traffic effectively,
     33 +ensuring that your Hasura API remains secure and accessible within your network infrastructure.
     34 + 
     35 +## Setting up a proxy
     36 + 
     37 +Configuring your Hasura GraphQL Engine to work behind a proxy involves setting up the proxy server to forward requests
     38 +to Hasura and, optionally, handle SSL/TLS termination. Here are some sample configurations for popular proxies that you
     39 +can use as a starting point:
     40 + 
     41 +### [Nginx](https://nginx.org/en/docs/)
     42 + 
     43 +Here is a sample `nginx.conf` to proxy requests to Hasura:
     44 + 
     45 +```nginx
     46 +server {
     47 + listen 80;
     48 + listen 443 ssl;
     49 + server_name hasura.<my-domain.com>;
     50 + 
     51 + location / {
     52 + proxy_pass http://localhost:8080/;
     53 + proxy_http_version 1.1;
     54 + proxy_set_header Upgrade $http_upgrade;
     55 + proxy_set_header Connection "upgrade";
     56 + }
     57 +}
     58 +```
     59 + 
     60 +The example above directs Nginx to listen on ports `80` and `443` for HTTP and HTTPS requests respectively on the
     61 +subdomain `hasura.<my-domain.com>`. The `proxy_pass` directive forwards requests to Hasura GraphQL Engine running on
     62 +port 8080.
     63 + 
      64 +:::info Serve via a URL prefix
     65 + 
     66 +To serve Hasura with a URL prefix instead of a separate subdomain, use `location /hasura/` or similar.
     67 + 
     68 +:::
     69 + 
     70 +### [Caddy](https://caddyserver.com/)
     71 + 
     72 +Here is a sample `Caddyfile` to proxy requests to Hasura:
     73 + 
     74 +```bash
     75 +hasura.<my-domain.com> {
     76 + reverse_proxy localhost:8080
     77 +}
     78 +```
     79 + 
     80 +In order to serve at a URL prefix, use the following configuration:
     81 + 
     82 +```bash
     83 +<my-domain.com> {
     84 + handle_path /hasura* {
     85 + reverse_proxy localhost:8080
     86 + }
     87 +}
     88 +```
     89 + 
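### [Kong](https://docs.konghq.com/)

One way to configure Kong as a reverse proxy for Hasura is through its Admin API. The snippet below is a minimal
sketch, assuming Kong's Admin API is reachable on `localhost:8001` and Hasura is running on `localhost:8080`; the
service and route names are illustrative:

```bash
# Register Hasura as a Kong service
curl -i -X POST http://localhost:8001/services \
  --data name=hasura \
  --data url='http://localhost:8080'

# Route requests for the Hasura subdomain to that service
curl -i -X POST http://localhost:8001/services/hasura/routes \
  --data name=hasura-route \
  --data 'hosts[]=hasura.<my-domain.com>'
```

In order to serve at a URL prefix instead, create the route with a `paths[]=/hasura` parameter rather than a host
match.
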
     90 +### [Traefik](https://doc.traefik.io/traefik/)
     91 + 
     92 +Here are sample `traefik.toml` and `traefik-dynamic.toml` files to proxy requests to Hasura:
     93 + 
     94 +```toml
     95 +#traefik.toml
     96 + 
     97 +[providers]
     98 + [providers.file]
     99 + filename = "traefik-dynamic.toml"
     100 + 
     101 +[api]
     102 + dashboard = true
     103 + debug = true
     104 + 
     105 +[entryPoints]
     106 + [entryPoints.web]
     107 + address = ":80"
     108 + 
     109 + [entryPoints.web.http]
     110 + [entryPoints.web.http.redirections]
     111 + [entryPoints.web.http.redirections.entryPoint]
     112 + to = "web-secure"
     113 + scheme = "https"
     114 + 
     115 + [entryPoints.web-secure]
     116 + address = ":443"
     117 + 
     118 +[certificatesResolvers.sample.acme]
     119 + email = "[email protected]"
     120 + storage = "acme.json"
     121 + 
     122 + [certificatesResolvers.sample.acme.httpChallenge]
     123 + # used during the challenge
     124 + entryPoint = "web"
     125 +```
     126 + 
     127 +```toml
     128 +#traefik-dynamic.toml
     129 + 
     130 +[http]
     131 + [http.routers]
     132 + [http.routers.my-router]
     133 + rule = "Host(`hasura.example.com`)"
     134 + service = "hasura"
     135 + entryPoints = ["web-secure"]
     136 + [http.routers.my-router.tls]
     137 + certResolver = "sample"
     138 + 
     139 + [http.services]
     140 + [http.services.hasura.loadbalancer]
     141 + [[http.services.hasura.loadbalancer.servers]]
      142 + url = "http://127.0.0.1:8080"
     143 +```
     144 + 
     145 +In order to serve at a URL prefix, use the following configuration:
     146 + 
     147 +```toml
     148 +#traefik-dynamic.toml
     149 +...
     150 + 
     151 + [http.routers]
     152 + [http.routers.my-router]
      153 + rule = "Host(`example.com`) && PathPrefix(`/hasura`)"
     154 + service = "hasura"
     155 + entryPoints = ["web-secure"]
     156 + [http.routers.my-router.tls]
     157 + certResolver = "sample"
     158 + 
     159 +...
     160 +```
     161 + 
  • ■ ■ ■ ■ ■ ■
    docs/docs/hasura-cli/install-hasura-cli.mdx
    skipped 45 lines
    46 46  You can also install a specific version of the CLI by providing the `VERSION` variable:
    47 47   
    48 48  ```bash
    49  -curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.36.0 bash
     49 +curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.37.0 bash
    50 50  ```
    51 51   
    52 52  </TabItem>
    skipped 18 lines
    71 71  You can also install a specific version of the CLI by providing the `VERSION` variable:
    72 72   
    73 73  ```bash
    74  -curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.36.0 bash
     74 +curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | VERSION=v2.37.0 bash
    75 75  ```
    76 76   
    77 77  </TabItem>
    skipped 46 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/index.mdx
    skipped 34 lines
    35 35   <div className="sub-heading">
    36 36   <div className="front-matter">
    37 37   <p>
    38  - The Hasura GraphQL Engine makes your data instantly accessible over a GraphQL API, so you can build and ship
    39  - modern, performant apps and APIs 10x faster. Hasura connects to your databases, REST and GraphQL endpoints, and
    40  - third party APIs to provide a unified, connected, real-time, secured GraphQL API for all of your data.
     38 + The{' '}
     39 + <Link to="https://github.com/hasura/graphql-engine" target="_blank">
     40 + <b>open-source Hasura GraphQL Engine</b>
     41 + </Link>{' '}
     42 + makes your data instantly accessible over a GraphQL API, so you can build and ship modern, performant apps and
     43 + APIs 10x faster. Hasura connects to your databases, REST and GraphQL endpoints, and third party APIs to provide
     44 + a unified, connected, real-time, secured GraphQL API for all of your data. You can deploy Hasura manually using
     45 + our{' '}
     46 + <Link to="https://hub.docker.com/r/hasura/graphql-engine-base/tags" target="_blank">
     47 + Community Edition Docker image
     48 + </Link>{' '}
     49 + which includes all the core features of GraphQL Engine.
    41 50   <br />
    42 51   <br />
    43 52   <Link to="https://cloud.hasura.io/signup/?pg=docs&plcmt=body&cta=hasura-cloud&tech=default" target="_blank">
    44 53   <b>Hasura Cloud</b>
    45 54   </Link> empowers you to create highly optimized, managed and massively scalable Hasura instances in seconds and includes
    46  - extra reliability, monitoring, caching, tracing, security and deployment features. You can also deploy Hasura manually
    47  - using our Community Edition Docker image which includes all the core features of GraphQL Engine.
     55 + extra reliability, monitoring, caching, tracing, security and deployment features.
    48 56   <br />
    49 57   <br />
    50 58   <VersionedLink to="/enterprise/overview">
    skipped 89 lines
  • ■ ■ ■ ■ ■
    docs/docs/migrations-metadata-seeds/manage-metadata.mdx
    skipped 379 lines
    380 380  ## Metadata Inconsistencies {#metadata-inconsistency}
    381 381   
     382 382  Metadata should always be consistent with the underlying database schemas. When it's not, Hasura will mark the
    383  -Metadata objects as `inconsistent`.
    384  - 
    385  -<Tabs groupId="user-preference" className="api-tabs">
    386  - 
    387  -<TabItem value="cli" label="CLI">
    388  - 
    389  -The status of Metadata inconsistency can be checked with the
    390  -[hasura metadata inconsistency](/hasura-cli/commands/hasura_metadata_inconsistency_status.mdx) command.
    391  - 
    392  -```bash
    393  -hasura metadata inconsistency status
    394  -```
    395  - 
    396  -CLI will log:
    397  - 
    398  -```text
    399  -INFO metadata is consistent
    400  -```
    401  - 
    402  -If there are inconsistent objects they can be listed with:
    403  - 
    404  -```bash
    405  -hasura metadata inconsistency list
    406  -```
    407  - 
    408  -CLI will log, for example:
    409  - 
    410  -```text
    411  -NAME TYPE DESCRIPTION REASON
    412  - 
    413  -author table {"name":"author","schema":"public"}... Inconsistent object: no such table/view exists in source: "author"
    414  -```
    415  - 
    416  -You can then manually address each of the inconsistencies or, if necessary, drop them **all** with the command:
    417  - 
    418  -```bash
    419  -hasura metadata inconsistency drop
    420  -```
    421  - 
    422  -CLI will log:
    423  - 
    424  -```text
    425  -INFO all inconsistent objects removed from metadata
    426  -```
    427  - 
    428  -</TabItem>
    429  - 
    430  -<TabItem value="console" label="Console">
    431  - 
    432  -1. Click on the settings ⚙ icon at the top right corner of the Console screen.
    433  - 
    434  - <Thumbnail
    435  - alt="Settings navigation button"
    436  - src="/img/migrations-metadata-seeds/settings-navigation_console_2-7-0.png"
    437  - />
    438  - 
    439  -2. Click on `Reset` button.
    440  - 
    441  - <Thumbnail alt="reset Metadata" src="/img/migrations-metadata-seeds/metadata-reset.png" width="850px" />
    442  - 
    443  -3. A pop-up will appear prompting you to confirm the process.
    444  - 
    445  -4. A notification should appear indicating success.
    446  - 
    447  -</TabItem>
    448  - 
    449  -<TabItem value="api" label="API">
    450  - 
    451  -The clearing of Metadata can be done via the
    452  -[clear_metadata](/api-reference/metadata-api/manage-metadata.mdx#metadata-clear-metadata) Metadata API.
    453  - 
    454  -Here is an example using `curl`:
    455  - 
    456  -```bash
    457  -curl -d'{"type": "clear_metadata", "args": {}}' http://localhost:8080/v1/metadata
    458  -```
    459  - 
    460  -If an admin secret is set, add `-H 'X-Hasura-Admin-Secret: [your-admin-secret]'` as the API is an admin-only API.
    461  - 
    462  -</TabItem>
    463  - 
    464  -</Tabs>
     383 +Metadata objects as `inconsistent`. You can learn more about Metadata inconsistencies on
     384 +[this page](/migrations-metadata-seeds/resolving-metadata-inconsistencies.mdx).
    465 385   
    466 386  ## Diff Metadata {#diff-metadata}
    467 387   
    skipped 21 lines
  • ■ ■ ■ ■
    docs/docs/migrations-metadata-seeds/metadata-best-practices.mdx
    skipped 4 lines
    5 5   - docs
    6 6   - best practices
    7 7  sidebar_label: Metadata Best Practices
    8  -sidebar_position: 11
     8 +sidebar_position: 12
    9 9  ---
    10 10   
    11 11  # Metadata Best Practices
    skipped 84 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/migrations-metadata-seeds/resolving-metadata-inconsistencies.mdx
     1 +---
     2 +description:
     3 + "Resolve Hasura metadata inconsistencies. This guide aids in aligning your GraphQL API's structure with your database,
      4 + eliminating inconsistencies and ensuring seamless and efficient operation."
     5 +keywords:
     6 + - hasura metadata
     7 + - graphql inconsistencies
     8 + - database schema
     9 + - resolving inconsistencies
     10 + - hasura graphql engine
     11 + - metadata management
     12 + - hasura cli
     13 + - hasura console
     14 + - metadata api
     15 + - metadata troubleshooting
     16 +sidebar_label: Resolving Metadata Inconsistencies
     17 +sidebar_position: 11
     18 +seoFrontMatterUpdated: true
     19 +---
     20 + 
     21 +import Tabs from '@theme/Tabs';
     22 +import TabItem from '@theme/TabItem';
     23 +import Thumbnail from '@site/src/components/Thumbnail';
     24 + 
     25 +# Resolving Metadata Inconsistencies
     26 + 
     27 +## Introduction
     28 + 
     29 +Typically, when you have a [metadata](/migrations-metadata-seeds/manage-metadata.mdx) inconsistency, your Hasura
     30 +metadata — which is the information describing the structure and configuration of your Hasura GraphQL API — is not
     31 +aligned with the actual state of the underlying database or the Hasura configuration. This can happen for several
     32 +reasons:
     33 + 
     34 +1. **Schema changes in a data source:** If you make changes directly to the database schema (like adding, modifying, or
      35 + dropping tables or columns) without updating the Hasura metadata, it can lead to inconsistencies. Hasura expects the
      36 + database schema to match the metadata it holds for that source.
     37 +2. **Manual changes to metadata:** If you manually modify Hasura's metadata files (like **`tables.yaml`**,
     38 + **`relationships.yaml`**, etc.) and these changes don't correspond to the actual database schema or Hasura settings,
     39 + it can result in inconsistencies.
     40 +3. **Failed migrations or operations:** Sometimes, failed migrations or incomplete operations (due to network issues,
     41 + errors in execution, etc.) can leave the metadata in an inconsistent state.
      42 +4. **Conflicts in Remote Schemas**: If you have Remote Schemas that are not properly configured, that change without the
      43 + metadata being updated, or that conflict with existing configurations, it may lead to metadata inconsistencies.
     44 + 
     45 +## Resolving metadata inconsistencies
     46 + 
     47 +You can use the CLI, Console or API to resolve metadata inconsistencies. Regardless of the method you choose, you'll
     48 +have two options:
     49 + 
     50 +**Reloading metadata**: This action is used when you make changes to your database schema outside of the Hasura Console,
     51 +such as adding a new table or modifying an existing one directly in your database. This ensures that Hasura's GraphQL
     52 +engine is aware of the latest structure of your database and can accurately reflect these changes in the GraphQL API it
     53 +generates. **It does not modify or reset any existing metadata configurations but simply updates Hasura's understanding
     54 +of the current database schema by introspecting it again and updating the metadata accordingly.**
     55 + 
      56 +**Resetting metadata**: This action clears all the existing metadata configurations in Hasura. This includes relationships,
     57 +permissions, and any manual configurations you've made using the Console. After resetting the metadata, you will need to
     58 +reconfigure these settings. **It's useful when you want to start fresh with Hasura's setup or if there are irreparable
     59 +inconsistencies in your current metadata setup.**
     60 + 
     61 +<Tabs groupId="user-preference" className="api-tabs">
     62 + 
     63 +<TabItem value="cli" label="CLI">
     64 + 
     65 +The status of Metadata inconsistency can be checked with the
     66 +[hasura metadata inconsistency](/hasura-cli/commands/hasura_metadata_inconsistency_status.mdx) command.
     67 + 
     68 +```bash
     69 +hasura metadata inconsistency status
     70 +```
     71 + 
     72 +The CLI will log:
     73 + 
     74 +```text
     75 +INFO metadata is consistent
     76 +```
     77 + 
     78 +You can attempt to [reload the metadata](/hasura-cli/commands/hasura_metadata_reload.mdx) with the command:
     79 + 
     80 +```bash
     81 +hasura metadata reload
     82 +```
     83 + 
     84 +If there are inconsistent objects they can be listed with:
     85 + 
     86 +```bash
     87 +hasura metadata inconsistency list
     88 +```
     89 + 
     90 +For example, the CLI will log:
     91 + 
     92 +```text
     93 +NAME TYPE DESCRIPTION REASON
     94 + 
     95 +author table {"name":"author","schema":"public"}... Inconsistent object: no such table/view exists in source: "author"
     96 +```
     97 + 
     98 +You can then manually address each of the inconsistencies or, if necessary, drop them **all** with the command:
     99 + 
     100 +```bash
     101 +hasura metadata inconsistency drop
     102 +```
     103 + 
     104 +CLI will log:
     105 + 
     106 +```text
     107 +INFO all inconsistent objects removed from metadata
     108 +```
     109 + 
     110 +</TabItem>
     111 + 
     112 +<TabItem value="console" label="Console">
     113 + 
      114 +1. Click the `Settings` (⚙) icon in the top-right corner of any Console page:
     115 + 
     116 + <Thumbnail alt="reset Metadata" src="/img/migrations-metadata-seeds/metadata-reset.png" width="850px" />
     117 + 
     118 +:::info Confirm reset metadata
     119 + 
      120 +If you choose to reset your Metadata, an alert will pop up asking you to confirm your choice.
     121 + 
     122 +:::
     123 + 
     124 +</TabItem>
     125 + 
     126 +<TabItem value="api" label="API">
     127 + 
     128 +You can attempt to reload the metadata with the
     129 +[reload_metadata](/api-reference/metadata-api/manage-metadata.mdx#metadata-reload-metadata) Metadata API:
     130 + 
     131 +```bash
     132 +POST /v1/metadata HTTP/1.1
     133 +Content-Type: application/json
     134 +X-Hasura-Role: admin
     135 + 
     136 +{
     137 + "type" : "reload_metadata",
     138 + "args": {
     139 + "reload_remote_schemas": true,
     140 + "reload_sources": false,
     141 + "recreate_event_triggers": true
     142 + }
     143 +}
     144 +```
     145 + 
     146 +The clearing of Metadata can be done via the
     147 +[clear_metadata](/api-reference/metadata-api/manage-metadata.mdx#metadata-clear-metadata) Metadata API:
     148 + 
     149 +```bash
     150 +curl -d'{"type": "clear_metadata", "args": {}}' http://localhost:8080/v1/metadata
     151 +```
     152 + 
     153 +If an admin secret is set, add `-H 'X-Hasura-Admin-Secret: [your-admin-secret]'` as the API is an admin-only API.
     154 + 
     155 +</TabItem>
     156 + 
     157 +</Tabs>
     158 + 
     159 +## Next steps
     160 + 
     161 +A metadata inconsistency can be a frightening and frustrating thing to encounter. To ensure it doesn't happen again,
     162 +consider using [migrations](/migrations-metadata-seeds/manage-migrations.mdx) to make changes to your database schema
      163 +and Hasura metadata. Migrations are a safe and reliable way to evolve your schema and metadata together.
     164 + 
     165 +If you're looking for an end-to-end example, check out our
     166 +[quickstart guide](/migrations-metadata-seeds/migrations-metadata-setup.mdx).
     167 + 
  • ■ ■ ■ ■ ■
    docs/docs/policies/versioning.mdx
    skipped 57 lines
    58 58  | LTS version | EOL Date |
    59 59  | ----------- | ----------- |
    60 60  | `v2.11` | Sep-01-2024 |
     61 +| `v2.36` | Dec-12-2025 |
    61 62   
  • ■ ■ ■ ■ ■ ■
    docs/docs/schema/bigquery/computed-fields.mdx
    skipped 17 lines
    18 18  ## What are computed fields?
    19 19   
    20 20  Computed fields are virtual fields that are dynamically computed and can be queried along with a table's columns.
    21  -Computed fields are computed upon request. They are computed by executing user-defined SQL functions using other columns
    22  -of the table and other custom inputs if needed.
    23 21   
    24  -:::info Note
     22 +Computed fields are computed upon request. They are computed by executing user-defined SQL functions which take columns
     23 +of the table, and other custom values if needed, as inputs to compute the field.
     24 + 
     25 +:::info Computed fields do not modify the database schema
    25 26   
    26  -Computed fields are only exposed over the GraphQL API and the BigQuery database schema is not modified on addition of a
    27  -computed field.
     27 +Computed fields are only exposed over the GraphQL API and the database schema is not modified on addition of a computed
      28 +field, i.e. a computed field cannot be fetched as a table column directly from the database.
    28 29   
    29 30  :::
    30 31   
    skipped 146 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/schema/bigquery/views.mdx
    skipped 16 lines
    17 17   
    18 18  ## What are views?
    19 19   
    20  -[Views][] can be used to expose the results of a custom query as a virtual table. Views are not persisted physically
    21  -i.e. the query defining a view is executed whenever data is requested from the view.
     20 +BigQuery [Views][] can be used to expose the results of a custom query as a virtual table. Views are not persisted
      21 +physically, i.e. the query defining a view is executed whenever data is requested from the view.
    22 22   
    23 23  Hasura GraphQL Engine lets you expose views over the GraphQL API to allow querying them using both `queries` and
    24 24  `subscriptions` just like regular tables.
    skipped 159 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/schema/common-patterns/export-graphql-schema.mdx
    skipped 29 lines
    30 30  gq https://my-graphql-engine.com/v1/graphql --introspect > schema.graphql
    31 31   
    32 32  # If Hasura GraphQL Engine is running with an admin secret
    33  -gq https://my-graphql-engine.com/v1/graphql -H "X-Hasura-Admin-Secret: adminsecretkey" --introspect > schema.graphql
     33 +gq https://my-graphql-engine.com/v1/graphql -H 'X-Hasura-Admin-Secret: adminsecretkey' --introspect > schema.graphql
    34 34  ```
    35 35   
    36 36  By default, it downloads the schema in `.graphql` format. If you want it in JSON format, you can use an additional flag
    skipped 17 lines
    54 54  apollo schema:download --endpoint https://my-graphql-engine.com/v1/graphql
    55 55   
    56 56  # If Hasura GraphQL Engine is running with an admin secret
    57  -apollo schema:download --endpoint https://my-graphql-engine.com/v1/graphql --header "X-Hasura-Admin-Secret: adminsecretkey"
     57 +apollo schema:download --endpoint https://my-graphql-engine.com/v1/graphql --header 'X-Hasura-Admin-Secret: adminsecretkey'
    58 58  ```
    59 59   
    60 60  Note that `apollo schema:download` is an alias of the command
    skipped 4 lines
  • ■ ■ ■ ■
    docs/docs/schema/ms-sql-server/stored-procedures.mdx
    skipped 36 lines
    37 37  SQL Server stored procedures are built-in or user-defined Transact-SQL statements that can be used to encapsulate some
    38 38  custom business logic or extend the built-in SQL functions and operators.
    39 39   
    40  -stored procedures support is a Cloud and Enterprise feature of Hasura.
     40 +_Stored procedures support is a Cloud and Enterprise feature of Hasura._
    41 41   
    42 42  :::info Supported features
    43 43   
    skipped 411 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/schema/postgres/computed-fields.mdx
    skipped 19 lines
    20 20  ## What are computed fields?
    21 21   
    22 22  Computed fields are virtual values or objects that are dynamically computed and can be queried along with a table/view's
    23  -columns. Computed fields are computed upon request. They are computed by executing
    24  -[custom SQL functions](https://www.postgresql.org/docs/current/sql-createfunction.html) (a.k.a. stored procedures) using
    25  -other columns of the table/view and other custom inputs if needed.
     23 +columns.
     24 + 
     25 +Computed fields are computed upon request. They are computed by executing
     26 +[user-defined SQL functions](https://www.postgresql.org/docs/current/sql-createfunction.html) (a.k.a. stored procedures)
     27 +which take the columns of the table/view, and other custom values if needed, as inputs to compute the field.
    26 28   
    27  -:::info Note
     29 +:::info Computed fields do not modify the database schema
    28 30   
    29 31  Computed fields are only exposed over the GraphQL API and the database schema is not modified on addition of a computed
    30  -field.
      32 +field, i.e. a computed field cannot be fetched as a table column directly from the database.
    31 33   
    32 34  :::
    33 35   
    skipped 4 lines
    38 40   
    39 41  - **Function behavior**: ONLY `STABLE` or `IMMUTABLE`
    40 42  - **Argument modes**: ONLY `IN`
    41  -- **Table Argument**: One input argument with a table row type
      43 +- **Table Argument**: ONE input argument of the table row type
    42 44  - **Return type**: Either `SETOF <table-name>` or `BASE` type
    43 45   
    44  -:::info Note
     46 +Functions used as computed fields can also accept other arguments other than the mandatory table row argument. Values
     47 +for these extra arguments can be passed as arguments to the computed field in the GraphQL API.
    45 48   
    46  -- Functions used as computed fields can also accept other arguments other than the mandatory table row argument. Values
    47  - for these extra arguments can be passed as arguments to the computed field in the GraphQL API.
    48  -- Functions used as computed fields do not need to be tracked by the Hasura GraphQL Engine.
     49 +:::info No need for tracking
     50 + 
     51 +Functions used as computed fields do not need to be [tracked](./custom-functions.mdx#pg-track-custom-sql-functions) by
      52 +the Hasura GraphQL Engine. As the requirements for tracking functions differ from those for adding them as computed
      53 +fields, not all functions that can be used as computed fields can be tracked, and vice versa.
    49 54   
    50 55  :::
    51 56   
    skipped 370 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/schema/postgres/custom-functions.mdx
    skipped 15 lines
    16 16  import Tabs from '@theme/Tabs';
    17 17  import TabItem from '@theme/TabItem';
    18 18   
    19  -# Postgres: Extend Schema with SQL Functions
     19 +# Postgres: Extend Schema with Custom SQL Functions
    20 20   
    21  -## What are custom SQL functions?
     21 +## What are Custom functions?
    22 22   
    23  -Custom SQL functions are [user-defined SQL functions](https://www.postgresql.org/docs/current/sql-createfunction.html)
    24  -that can be used to either encapsulate some custom business logic or extend the built-in SQL functions and operators.
    25  -SQL functions are also referred to as **stored procedures**.
     23 +Postgres [user-defined SQL functions](https://www.postgresql.org/docs/current/sql-createfunction.html) can be used to
     24 +either encapsulate some custom business logic or extend the built-in SQL functions and operators. SQL functions are also
     25 +referred to as **stored procedures**.
    26 26   
    27  -Hasura GraphQL Engine lets you expose certain types of custom functions as top level fields in the GraphQL API to allow
    28  -querying them with either `queries` or `subscriptions`, or for `VOLATILE` functions as `mutations`.
     27 +Hasura GraphQL Engine lets you expose certain types of user-defined functions as top level fields in the GraphQL API to
     28 +allow querying them with either `queries` or `subscriptions`, or for `VOLATILE` functions as `mutations`. These are
     29 +called Custom functions.
    29 30   
    30  -:::info Note
     31 +:::info Functions as computed fields
    31 32   
    32  -Custom SQL functions can also be queried as [computed fields](/schema/postgres/computed-fields.mdx) of tables.
     33 +User-defined SQL functions can also be added as [computed fields](/schema/postgres/computed-fields.mdx) of tables.
    33 34   
    34 35  :::
    35 36   
    36 37  ### Supported SQL functions {#pg-supported-sql-functions}
    37 38   
    38  -Currently, only functions which satisfy the following constraints can be exposed as top level fields in the GraphQL API
    39  -(_terminology from_ [Postgres docs](https://www.postgresql.org/docs/current/sql-createfunction.html)):
     39 +Currently, only functions which satisfy the following constraints can be [tracked](#pg-track-custom-sql-functions) as
     40 +Custom functions to be exposed as top level fields in the GraphQL API (_terminology from_
     41 +[Postgres docs](https://www.postgresql.org/docs/current/sql-createfunction.html)):
    40 42   
    41  -- **Function behavior**: `STABLE` or `IMMUTABLE` functions may _only_ be exposed as queries. `VOLATILE` functions may
    42  - be exposed as mutations or queries.
    43  -- **Return type**: MUST be `SETOF <table-name>` OR `<table-name>` where `<table-name>` is already tracked
     43 +- **Function behavior**: `STABLE` or `IMMUTABLE` functions may _only_ be exposed as queries. `VOLATILE` functions may be
     44 + exposed as mutations or queries.
     45 +- **Return type**: **MUST** be `SETOF <table-name>` OR `<table-name>` where `<table-name>` is already tracked
    44 46  - **Argument modes**: ONLY `IN`
    45 47   
     48 +:::tip Return type workaround
     49 + 
      50 +If the required `SETOF` table doesn't already exist, or your function needs to return a custom type (i.e. a row set),
      51 +you can [create and track](/schema/postgres/tables.mdx) an empty table with the required schema to support the function.
     52 + 
     53 +:::
     54 + 
    46 55  ## Creating SQL functions {#pg-create-sql-functions}
    47 56   
    48 57  SQL functions can be created using SQL statements which can be executed as follows:
    skipped 45 lines
    94 103  ## Track SQL functions {#pg-track-custom-sql-functions}
    95 104   
    96 105  Functions can be present in the underlying Postgres database without being exposed over the GraphQL API. In order to
    97  -expose a function over the GraphQL API, it needs to be **tracked**.
      106 +expose a function as a top level field in the GraphQL API, it needs to be **tracked** as a custom function.
    98 107   
    99 108  <Tabs groupId="user-preference" className="api-tabs">
    100 109  <TabItem value="console" label="Console">
    skipped 46 lines
    147 156   
    148 157  </TabItem>
    149 158  </Tabs>
    150  - 
    151  -:::info Note
    152  - 
    153  -If the `SETOF` table doesn't already exist or your function needs to return a custom type i.e. row set, create and track
    154  -an empty table with the required schema to support the function before executing the above steps.
    155  - 
    156  -:::
    157 159   
    158 160  ## Use cases
    159 161   
    skipped 403 lines
    563 565  defined on the function `f` for the role `r`. Additionally, role `r` must have SELECT permissions on the returning table
    564 566  of the function `f`.
    565 567   
    566  -[Access control permissions](/auth/authorization/permissions/index.mdx) configured for the `SETOF` table of a
    567  -function are also applicable to the function itself.
     568 +[Access control permissions](/auth/authorization/permissions/index.mdx) configured for the `SETOF` table of a function
     569 +are also applicable to the function itself.
    568 570   
    569 571  **For example**, in our text-search example above, if the role `user` has access only to certain columns of the table
    570 572  `article`, a validation error will be thrown if the `search_articles` query is run selecting a column to which the
    skipped 11 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/schema/postgres/input-validations.mdx
    skipped 233 lines
    234 234   `x-hasura-*`.
    235 235  - `data.input`: The schema of `data.input` varies per mutation type. This schema is defined below.
    236 236   
     237 +:::info Note
     238 + 
     239 +In case there are multiple headers with the same name, the order of precedence is: client headers \> resolved user
     240 +(`x-hasura-*`) variables \> configuration headers.
     241 + 
     242 +If you want to change the order of precedence to: configuration headers \> resolved user (`x-hasura-*`) variables
     243 +\> client headers, use the [configured header
     244 +precedence](deployment/graphql-engine-flags/reference.mdx/#configured-header-precedence) flag or environment variable.
     245 + 
     246 +:::
     247 + 
    237 248  #### Insert Mutations
    238 249   
    239 250  ```json
    skipped 99 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/schema/postgres/views.mdx
    skipped 16 lines
    17 17   
    18 18  ## What are views?
    19 19   
    20  -[Views](https://www.postgresql.org/docs/current/sql-createview.html) can be used to expose the results of a custom query
    21  -as a virtual table. Views are not persisted physically i.e. the query defining a view is executed whenever data is
    22  -requested from the view.
     20 +Postgres [Views](https://www.postgresql.org/docs/current/sql-createview.html) can be used to expose the results of a
      21 +custom query as a virtual table. Views are not persisted physically, i.e. the query defining a view is executed whenever
     22 +data is requested from the view.
    23 23   
    24 24  Hasura GraphQL Engine lets you expose views over the GraphQL API to allow querying them using both `queries` and
    25 25  `subscriptions` just like regular tables.
    skipped 172 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/schema/snowflake/custom-functions.mdx
    skipped 25 lines
    26 26   
    27 27  ## What are user-defined functions?
    28 28   
    29  -[User-defined functions](https://docs.snowflake.com/en/sql-reference/udf-overview) (UDFs) are custom functions that can
    30  -be used to either encapsulate some custom business logic or extend the built-in SQL functions and operators.
     29 +Snowflake [User-defined functions](https://docs.snowflake.com/en/sql-reference/udf-overview) (UDFs) are custom functions
     30 +that can be used to either encapsulate some custom business logic or extend the built-in SQL functions and operators.
    31 31   
    32  -Hasura GraphQL Engine lets you expose certain types of custom functions as top-level fields in the GraphQL API to allow
    33  -querying them.
     32 +Hasura GraphQL Engine lets you expose certain types of UDFs as top-level fields in the GraphQL API to allow querying
     33 +them.
    34 34   
    35 35  ### Supported functions {#snowflake-supported-sql-functions}
    36 36   
    skipped 86 lines
  • ■ ■ ■ ■ ■
    docs/docs/schema/snowflake/native-queries.mdx
    skipped 246 lines
    247 247   
    248 248  ```graphql
    249 249  query {
    250  - article_with_excerpt(args: { max_length: 20 }, where: { date: { _gte: "2023-03-01" } }) {
     250 + article_with_excerpt(args: { max_length: 20 }) {
    251 251   id
    252 252   title
    253 253   date
    skipped 242 lines
    496 496  use an argument to specify the name of the table in a `FROM` clause.
    497 497   
    498 498  When making a query, the arguments are specified using the `args` parameter of the query root field.
    499  - 
    500  -##### Example: `LIKE` operator
    501  - 
    502  -A commonly used operator is the `LIKE`. When used in a `WHERE` condition, it's usually written with this syntax
    503  -`WHERE Title LIKE '%word%'`.
    504  - 
    505  -In order to use it with Native Query arguments, you need to use this syntax `LIKE ('%' || {{searchTitle}} || '%')`,
    506  -where `searchTitle` is the Native Query parameter.
    507  - 
    508  -## Using the Native Query
    509  - 
    510  -You can make a GraphQL request using the specified root field name just as you would any other GraphQL query. When
    511  -making a query, the arguments are specified using the `args` parameter of the query root field.
    512  - 
    513  -```graphql
    514  -query {
    515  - <root field name>(
    516  - [args: {"<argument name>": <argument value>, ...},]
    517  - [where: ...,]
    518  - [order_by: ..., distinct_on: ...,]
    519  - [limit: ..., offset: ...]
    520  - ) {
    521  - <field 1>
    522  - <field 2>
    523  - ...
    524  - }
    525  -}
    526  -```
    527 499   
    528 500  ## Query functionality
    529 501   
    skipped 192 lines
  • ■ ■ ■ ■ ■ ■
    docs/docs/security/dynamic-secrets.mdx
     1 +---
     2 +description: Rotate database secrets without restarting Hasura GraphQL Engine
     3 +sidebar_label: Dynamic Secrets
     4 +keywords:
     5 + - hasura
     6 + - docs
     7 + - deployment
     8 + - dynamic secrets
     9 + - rotate secrets
     10 +sidebar_position: 9
     11 +---
     12 + 
     13 +import Tabs from '@theme/Tabs';
     14 +import TabItem from '@theme/TabItem';
     15 +import Thumbnail from '@site/src/components/Thumbnail';
     16 +import ProductBadge from '@site/src/components/ProductBadge';
     17 + 
     18 +# Dynamic Secrets
     19 + 
     20 +<ProductBadge ce self />
     21 + 
     22 +## Introduction
     23 + 
     24 +Dynamic secrets allow rotating database credentials without requiring you to restart the Hasura GraphQL Engine. Upon
     25 +enabling this feature, database connection strings will be read from a configured file for each new connection or upon
     26 +encountering a connection error.
     27 + 
     28 +## Configuration
     29 + 
     30 +:::tip Enabling this feature
     31 + 
     32 +To enable this feature, the environment variable `HASURA_GRAPHQL_DYNAMIC_SECRETS_ALLOWED_PATH_PREFIX` must be set and
     33 +non-empty. File paths used with this feature must start with the prefix set in this environment variable. See
     34 +[Dynamic Secrets Allowed Path Prefix](/deployment/graphql-engine-flags/reference.mdx#dynamic-secrets-allowed-path-prefix)
     35 +for reference.
     36 + 
     37 +:::
     38 + 
     39 +<Tabs groupId="user-preference" className="api-tabs">
     40 +<TabItem value="console" label="Console">
     41 + 
      42 +To add a new Postgres database with this feature, navigate to the `Data` tab and click `Data Manager`. Choose Postgres
      43 +and click `Connect Existing Database`. Choose `Dynamic URL` in the options and provide the path of the file from which
      44 +the database connection string can be read.
     45 + 
     46 +<Thumbnail
     47 + src="/img/databases/postgres/dynamic-secrets/dynamic-secrets.png"
     48 + alt="Dynamic secrets configuration for Postgres"
     49 +/>
     50 + 
     51 +</TabItem>
     52 +<TabItem value="cli" label="CLI">
     53 + 
     54 +Head to the `/metadata/databases/databases.yaml` file and add the database configuration as below:
     55 + 
     56 +```yaml
     57 +- name: pgDatabase
     58 + kind: postgres
     59 + configuration:
     60 + connection_info:
     61 + # highlight-start
     62 + database_url:
     63 + dynamic_from_file: /secrets/dbCredentials
     64 + isolation_level: read-committed
     65 + # highlight-end
     66 + use_prepared_statements: false
     67 +```
     68 + 
     69 +Apply the Metadata by running:
     70 + 
     71 +```bash
     72 +hasura metadata apply
     73 +```
     74 + 
     75 +</TabItem>
     76 +<TabItem value="api" label="API">
     77 + 
      78 +You can add a data source with dynamic secrets using the
     79 +[pg_add_source](/api-reference/metadata-api/source.mdx#metadata-pg-add-source) Metadata API.
     80 + 
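For example, here is a minimal sketch using `curl` that mirrors the CLI configuration above (the admin secret,
source name, and file path are illustrative):

```bash
curl -X POST http://localhost:8080/v1/metadata \
  -H 'Content-Type: application/json' \
  -H 'X-Hasura-Admin-Secret: <admin-secret>' \
  -d '{
    "type": "pg_add_source",
    "args": {
      "name": "pgDatabase",
      "configuration": {
        "connection_info": {
          "database_url": { "dynamic_from_file": "/secrets/dbCredentials" },
          "isolation_level": "read-committed",
          "use_prepared_statements": false
        }
      }
    }
  }'
```
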
     81 +</TabItem>
     82 +</Tabs>
     83 + 
     84 +## Configuration for metadata database
     85 + 
     86 +To enable rotating secrets for your metadata database, the environment variable `HASURA_GRAPHQL_METADATA_DATABASE_URL`
     87 +must be set as `dynamic-from-file:///path/to/file`. The connection string to the metadata database will be read from
     88 +this file. See [Metadata Database URL](/deployment/graphql-engine-flags/reference.mdx/#metadata-database-url) for
     89 +reference.
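
For example, a minimal sketch (the file path is illustrative and must start with the configured allowed path prefix):

```bash
# The metadata database connection string is read from this file for each new connection
export HASURA_GRAPHQL_DYNAMIC_SECRETS_ALLOWED_PATH_PREFIX='/secrets'
export HASURA_GRAPHQL_METADATA_DATABASE_URL='dynamic-from-file:///secrets/metadata-db-url'
```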
     90 + 
     91 +## Template variables
     92 + 
     93 +Dynamic secrets can be used in template variables for data connectors. See
     94 +[Template variables](/databases/database-config/data-connector-config.mdx/#template) for reference.
     95 + 
  • ■ ■ ■ ■ ■ ■
    docs/docs/security/security-best-practices.mdx
    skipped 44 lines
    45 45  key. Subsequently, implement a plan to rotate admin secrets to limit the exposure of an admin secret being shared too
    46 46  broadly.
    47 47   
    48  -[Multiple admin secrets](/auth/authentication/multiple-admin-secrets.mdx) should be used in situations where admin secrets have
    49  -different rotation timelines or when granting temporary access is needed.
     48 +[Multiple admin secrets](/auth/authentication/multiple-admin-secrets.mdx) should be used in situations where admin
     49 +secrets have different rotation timelines or when granting temporary access is needed.
    50 50   
    51 51  Leverage [allowed operations lists](https://www.graphql-code-generator.com/plugins/other/hasura-allow-list) whenever
    52 52  possible to restrict unbounded or unexpected operations from being executed against the GraphQL endpoint. Allow lists
    skipped 29 lines
    82 82  - Review the [permissions summary](/deployment/production-checklist.mdx#review-the-summary) for each schema to verify
    83 83   permissions are constructed appropriately for your expected data access.
    84 84   
    85  -- Configure an [anonymous default role](/auth/authorization/permissions/common-roles-auth-examples.mdx#unauthorized-users-example) in
     85 +- Configure an
     86 + [anonymous default role](/auth/authorization/permissions/common-roles-auth-examples.mdx#unauthorized-users-example) in
    86 87   order to apply global security permissions. This default role should be configured similarly to any other role. This
    87 88   includes [RBAC permissions](/auth/authorization/quickstart.mdx), [API limits](/security/api-limits.mdx),
    88 89   [allowed operations lists](https://www.graphql-code-generator.com/plugins/other/hasura-allow-list) and
    skipped 25 lines
     114 115  Hasura GraphQL Engine communicates with your data source(s) via ODBC connection strings. This means Hasura has the same
    115 116  permissions as the provided credentials in the connection string.
    116 117   
    117  -- Use environment variables rather than a hardcoded value when configuring the database connection string. This environment variable can then be reused in the other environments (e.g., staging or production) while containing a reference to the environment-specific database connection string. Please see [Metadata Best Practices](/migrations-metadata-seeds/metadata-best-practices.mdx) for more information.
     118 +- Use environment variables rather than a hardcoded value when configuring the database connection string. This
     119 + environment variable can then be reused in the other environments (e.g., staging or production) while containing a
     120 + reference to the environment-specific database connection string. Please see
     121 + [Metadata Best Practices](/migrations-metadata-seeds/metadata-best-practices.mdx) for more information.
    118 122   
    119 123  - Review the database permissions allocated via the provided credentials to ensure the level of access granted to Hasura
    120 124   is appropriate.
    121 125   
    122 126  - Use database connections strings with the least privileges required for API operations.
    123 127   
    124  -- Configure [read replicas](/databases/database-config/read-replicas.mdx) to route read-only operations (queries) to one (or
    125  - many) read replicas.
     128 +- Configure [read replicas](/databases/database-config/read-replicas.mdx) to route read-only operations (queries) to one
     129 + (or many) read replicas.
    126 130   
    127 131  ## Networking/API gateway
    128 132   
    129 133  We recommend the following HTTP layer security policies to be configured at the API gateway:
    130 134   
    131  -- [Configure HTTPS](/deployment/enable-https.mdx) on your reverse proxy to ensure encrypted communication between your
    132  - client and Hasura.
     135 +- [Configure HTTPS](/deployment/serve-behind-proxy.mdx) on your reverse proxy to ensure encrypted communication between
     136 + your client and Hasura.
    133 137  - Implement request and response size restrictions.
     134 138  - Restrict allowed connection time to prevent incidents such as slowloris attacks.
    135 139  - Apply both IP filtering and IP rate limiting.
    skipped 7 lines
  • ■ ■ ■ ■ ■
    docs/docusaurus.config.js
    skipped 18 lines
    19 19   projectName: 'graphql-engine',
    20 20   staticDirectories: ['static', 'public'],
    21 21   customFields: {
    22  - docsBotEndpointURL: process.env.NODE_ENV === "development" ? "ws://localhost:8000/hasura-docs-ai" : "wss://hasura-docs-bot.deno.dev/hasura-docs-ai",
     22 + docsBotEndpointURL:
     23 + process.env.NODE_ENV === 'development'
     24 + ? 'ws://localhost:8000/hasura-docs-ai'
     25 + : 'wss://website-api.hasura.io/chat-bot/hasura-docs-ai',
    23 26   hasuraVersion: 2,
     27 + DEV_TOKEN: process.env.DEV_TOKEN,
    24 28   },
    25 29   scripts: [],
    26 30   webpack: {
    skipped 235 lines
  • ■ ■ ■ ■ ■
    docs/package.json
    skipped 40 lines
    41 41   "react-dom": "^17.0.2",
    42 42   "react-transition-group": "^4.4.2",
    43 43   "sass": "^1.49.8",
     44 + "usehooks-ts": "^2.9.1",
    44 45   "uuid": "^9.0.0"
    45 46   },
    46 47   "devDependencies": {
    skipped 22 lines
  • ■ ■ ■ ■ ■ ■
    docs/src/components/AiChatBot/AiChatBot.tsx
    1 1  import React, { useEffect, useRef, useState } from 'react';
    2  -import Markdown from 'markdown-to-jsx';
      2 +import Markdown from 'markdown-to-jsx';
    3 3  import './styles.css';
    4 4  import useDocusaurusContext from '@docusaurus/useDocusaurusContext';
    5 5  import { CloseIcon, RespondingIconGray, SparklesIcon } from '@site/src/components/AiChatBot/icons';
    6  -import useLocalStorage from "@site/src/components/AiChatBot/useLocalStorage";
      6 +import { useLocalStorage } from 'usehooks-ts';
     7 +import profilePic from '@site/static/img/hasura-ai-profile-pic.png';
     8 + 
    7 9   
    8 10  interface Message {
    9 11   userMessage: string;
    skipped 27 lines
    37 39  ];
    38 40   
    39 41   
    40  -function AiChatBot() {
     42 +export function AiChatBot() {
    41 43   // Get the docsBotEndpointURL and hasuraVersion from the siteConfig
    42 44   const {
    43 45   siteConfig: { customFields },
    skipped 21 lines
    65 67   // Enables scrolling to the end
    66 68   const scrollDiv = useRef<HTMLDivElement>(null);
    67 69   
    68  - const { docsBotEndpointURL, hasuraVersion } = customFields as { docsBotEndpointURL: string; hasuraVersion: number };
     70 + const { docsBotEndpointURL, hasuraVersion, DEV_TOKEN } = customFields as { docsBotEndpointURL: string; hasuraVersion: number; DEV_TOKEN: string };
    69 71   
    70 72   const storedUserID = localStorage.getItem('hasuraDocsUserID') as string | "null";
    71 73   
    skipped 12 lines
    84 86   const atBottom = Math.abs(scrollDiv.current?.scrollHeight - Math.floor(e.target.scrollTop + e.target.clientHeight)) < 2;
    85 87   setIsAutoScroll(atBottom);
    86 88   };
    87  - 
    88 89   
    89 90   // Update the ref when the currentMessage changes ie: when the endpoint is responding
    90 91   useEffect(() => {
    skipped 5 lines
    96 97   let websocket;
    97 98   let reconnectInterval;
    98 99   
     100 + const queryDevToken = process.env.NODE_ENV === "development" && DEV_TOKEN ? `&devToken=${DEV_TOKEN}` : "";
     101 + 
     102 + 
     103 + console.log("process.env.NODE_ENV", process.env.NODE_ENV);
     104 + 
    99 105   const connectWebSocket = () => {
    100  - websocket = new WebSocket(encodeURI(`${docsBotEndpointURL}?version=${hasuraVersion}&userId=${storedUserID}`));
     106 + websocket = new WebSocket(encodeURI(`${docsBotEndpointURL}?version=${hasuraVersion}&userId=${storedUserID}${queryDevToken}`));
    101 107   
    102 108   websocket.onopen = () => {
    103 109   console.log('Connected to the websocket');
    skipped 100 lines
    204 210   <div className="info-bar">
    205 211   <div className={"bot-name-pic-container"}>
    206 212   <div className="bot-name">HasuraAI</div>
    207  - <img src={"/docs/img/hasura-ai-profile-pic.png"} height={30} width={30} className="bot-pic"/>
     213 + <img src={profilePic} height={30} width={30} className="bot-pic"/>
    208 214   </div>
    209  - <button className="clear-button" onClick={() => setMessages(initialMessages)}>Clear</button>
     215 + <button className="clear-button" onClick={() => {
     216 + setMessages(initialMessages)
     217 + setCurrentMessage({ userMessage: '', botResponse: '' });
     218 + }}>Clear</button>
    210 219   </div>
    211 220   <div className="messages-container" onScroll={handleScroll} ref={scrollDiv}>
    212 221   {messages.map((msg, index) => (
    skipped 54 lines
    267 276   </div>
    268 277   );
    269 278  }
    270  - 
    271  -export default AiChatBot;
    272  - 
  • ■ ■ ■ ■ ■ ■
    docs/src/components/AiChatBot/styles.css
    skipped 75 lines
    76 76   color: #333;
    77 77  }
    78 78   
     79 +.messages-container pre {
     80 + background-color: #8f8f8f;
     81 +}
     82 + 
    79 83  .formatted-text a {
    80 84   color: blue;
    81 85   text-decoration: underline;
    skipped 4 lines
    86 90   flex: 1;
    87 91  }
    88 92   
    89  -.message {
     93 +.chat-popup .message {
    90 94   border-radius: 8px;
    91 95   padding: 10px 15px;
    92 96   margin: 5px 0;
    skipped 38 lines
    131 135  .responding-message {
    132 136  }
    133 137   
    134  -input {
     138 +.chat-popup input {
    135 139   width: 80%;
    136 140   padding: 10px;
    137 141   border-radius: 5px 0 0 5px;
    skipped 4 lines
    142 146   flex: 1;
    143 147  }
    144 148   
    145  -.input-container {
     149 +.chat-popup .input-container {
    146 150   display: flex;
    147 151   margin-top: auto;
    148 152   width: 100%;
    skipped 1 lines
    150 154   background-color: #fff;
    151 155  }
    152 156   
    153  -.input-text {
     157 +.chat-popup .input-text {
    154 158   font-size: 16px;
    155 159   color: #333;
    156 160   background-color: white;
    157 161  }
    158 162   
    159  -.input-text:disabled {
     163 +.chat-popup .input-text:disabled {
    160 164   background-color: #eeeeee !important;
    161 165  }
    162 166   
    163  -.input-button {
     167 +.chat-popup .input-button {
    164 168   background-color: #1699e2;
    165 169   color: white;
    166 170   padding-left: 15px;
    skipped 4 lines
    171 175   cursor: pointer;
    172 176  }
    173 177   
    174  -.input-button:disabled {
     178 +.chat-popup .input-button:disabled {
    175 179   background-color: #ababab;
    176 180  }
    177 181   
    178  -.info-bar {
     182 +.chat-popup .info-bar {
    179 183   display: flex;
    180 184   justify-content: space-between;
    181 185   align-items: center;
    skipped 26 lines
    208 212   font-size: 0.9rem;
    209 213  }
    210 214   
    211  -html[data-theme=dark] code {
     215 +html[data-theme=dark] .messages-container code {
    212 216   background-color: #e0e0e0;
    213 217  }
  • ■ ■ ■ ■ ■ ■
    docs/src/components/AiChatBot/useLocalStorage.ts
    1  -import { useState } from 'react';
    2  -export const useLocalStorage = <T>(key: string, defaultValue: T) => {
    3  - // Create state variable to store localStorage value in state
    4  - const [localStorageValue, setLocalStorageValue] = useState(() => {
    5  - try {
    6  - const value = localStorage.getItem(key);
    7  - // If value is already present in localStorage then return it
    8  - 
    9  - // Else set default value in localStorage and then return it
    10  - if (value) {
    11  - let parsedValue = JSON.parse(value);
    12  - 
    13  - if (Array.isArray(parsedValue)) {
    14  - const filteredValue = parsedValue.filter(item => !!item);
    15  - 
    16  - // Update localStorage if non-truthy values were filtered out
    17  - if (filteredValue.length !== parsedValue.length) {
    18  - parsedValue = filteredValue;
    19  - localStorage.setItem(key, JSON.stringify(filteredValue));
    20  - }
    21  - }
    22  - 
    23  - return parsedValue as T;
    24  - } else {
    25  - localStorage.setItem(key, JSON.stringify(defaultValue));
    26  - return defaultValue;
    27  - }
    28  - } catch (error) {
    29  - localStorage.setItem(key, JSON.stringify(defaultValue));
    30  - return defaultValue;
    31  - }
    32  - });
    33  - 
    34  - // this method update our localStorage and our state
    35  - const setLocalStorageStateValue = valueOrFn => {
    36  - let newValue: T;
    37  - if (typeof valueOrFn === 'function') {
    38  - const fn = valueOrFn as (value: T) => T;
    39  - newValue = fn(localStorageValue);
    40  - } else {
    41  - newValue = valueOrFn;
    42  - }
    43  - 
    44  - // Filter out non-truthy values if newValue is an array
    45  - if (Array.isArray(newValue)) {
    46  - newValue = newValue.filter(item => !!item) as T;
    47  - }
    48  - 
    49  - localStorage.setItem(key, JSON.stringify(newValue));
    50  - setLocalStorageValue(newValue);
    51  - };
    52  - 
    53  - return [localStorageValue, setLocalStorageStateValue] as const;
    54  -};
    55  - 
    56  -export default useLocalStorage;
    57  - 
  • ■ ■ ■ ■ ■ ■
    docs/src/components/CustomDocItem/index.tsx
    skipped 4 lines
    5 5  import CustomFooter from '@site/src/components/CustomFooter';
    6 6  import styles from './styles.module.scss';
    7 7  import { Redirect } from '@docusaurus/router';
    8  -import AiChatBot from "@site/src/components/AiChatBot/AiChatBot";
     8 +import { AiChatBot } from "@site/src/components/AiChatBot/AiChatBot";
    9 9  import BrowserOnly from '@docusaurus/BrowserOnly';
    10 10  const CustomDocItem = props => {
    11 11   useEffect(() => {
    skipped 68 lines
    80 80   <GraphQLWithHasuraBanner />
    81 81   <BrowserOnly fallback={<div>Loading...</div>}>
    82 82   {() => <AiChatBot/>}
    83  - </BrowserOnly>
     83 + </BrowserOnly>
    84 84   <CustomFooter />
    85 85   </div>
    86 86   </div>
    skipped 5 lines
  • ■ ■ ■ ■ ■ ■
    docs/src/css/header.scss
    skipped 2 lines
    3 3   
    4 4  /* Docusaurus Specific Styles */
    5 5  .header-github-link {
     6 + display: flex;
     7 + gap: 8px;
     8 + align-items: center;
    6 9   &::before {
    7 10   content: '';
    8 11   width: 24px;
    skipped 1 lines
    10 13   display: flex;
    11 14   background: url("data:image/svg+xml,%3Csvg viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath d='M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12'/%3E%3C/svg%3E")
    12 15   no-repeat;
     16 + }
     17 + 
     18 + &::after {
      19 + content: '30.5k ⭐';
    13 20   }
    14 21   
    15 22   &:hover {
    skipped 144 lines
  • docs/static/img/databases/data-connector/connect-final.png
  • docs/static/img/databases/postgres/dynamic-secrets/dynamic-secrets.png
  • docs/static/img/migrations-metadata-seeds/metadata-reset.png
  • ■ ■ ■ ■ ■ ■
    docs/yarn.lock
    skipped 5521 lines
    5522 5522   sass: ^1.49.8
    5523 5523   swc-loader: ^0.2.3
    5524 5524   typescript: ^4.8.4
     5525 + usehooks-ts: ^2.9.1
    5525 5526   uuid: ^9.0.0
    5526 5527   languageName: unknown
    5527 5528   linkType: soft
    skipped 5900 lines
    11428 11429   "@types/react":
    11429 11430   optional: true
    11430 11431   checksum: ed3f2ddddf6f21825e2ede4c2e0f0db8dcce5129802b69d1f0575fc1b42380436e8c76a6cd885d4e9aa8e292e60fb8b959c955f33c6a9123b83814a1a1875367
     11432 + languageName: node
     11433 + linkType: hard
     11434 + 
     11435 +"usehooks-ts@npm:^2.9.1":
     11436 + version: 2.9.1
     11437 + resolution: "usehooks-ts@npm:2.9.1"
     11438 + peerDependencies:
     11439 + react: ^16.8.0 || ^17.0.0 || ^18.0.0
     11440 + react-dom: ^16.8.0 || ^17.0.0 || ^18.0.0
     11441 + checksum: 36f1e4142ce23bc019b81d2e93aefd7f2c350abcf255598c21627114a69a2f2f116b35dc3a353375f09c6e4c9b704a04f104e3d10e98280545c097feca66c30a
    11431 11442   languageName: node
    11432 11443   linkType: hard
    11433 11444   
    skipped 512 lines
  • ■ ■ ■ ■ ■
    frontend/apps/console-ce/src/css/legacy-boostrap.css
    skipped 2324 lines
    2325 2325   appearance: none;
    2326 2326  }
    2327 2327   
    2328  -:where(.bootstrap-jail) input[type='radio'],
    2329 2328  :where(.bootstrap-jail) input[type='checkbox'] {
    2330 2329   margin: 4px 0 0;
    2331 2330   margin-top: 1px \9;
    skipped 5888 lines
  • ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/components/Services/ApiExplorer/Analyzer/QueryAnalyzer.js
    skipped 94 lines
    95 95   <div className="w-full">
    96 96   <div className="p-md pt-0">
    97 97   <div className="text-[#767e93] font-bold py-sm">
    98  - Generated SQL
     98 + Generated Query
    99 99   </div>
    100 100   <div className="w-full overflow-y-scroll h-[calc(30vh)] mb-sm">
    101 101   <Button
    skipped 88 lines
  • ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/components/Services/ApiExplorer/GraphiQLWrapper/GraphiQLWrapper.js
    skipped 136 lines
    137 137   const handleClickPrettifyButton = () => {
    138 138   trackGraphiQlToolbarButtonClick('Prettify');
    139 139   
    140  - const editor = graphiqlContext.getQueryEditor();
    141  - const currentText = editor.getValue();
    142  - const prettyText = print(sdlParse(currentText));
    143  - editor.setValue(prettyText);
     140 + const queryEditor = graphiqlContext.getQueryEditor();
     141 + const currentQueryText = queryEditor.getValue();
     142 + const prettyQueryText = print(sdlParse(currentQueryText));
     143 + queryEditor.setValue(prettyQueryText);
     144 + 
     145 + try {
     146 + const variableEditor = graphiqlContext.getVariableEditor();
     147 + const currentVariableText = variableEditor.getValue();
     148 + const prettyVariableText = JSON.stringify(
     149 + JSON.parse(currentVariableText),
     150 + null,
     151 + 2
     152 + );
     153 + variableEditor.setValue(prettyVariableText);
     154 + } catch (err) {
     155 + // Ignore JSON parse errors, since we can't format invalid JSON anyway
     156 + }
    144 157   };
    145 158   
    146 159   const handleToggleHistory = () => {
    skipped 310 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/components/Services/ApiExplorer/TopNav.tsx
    skipped 6 lines
    7 7  import { FaExclamationCircle, FaRegMap } from 'react-icons/fa';
    8 8  import { sendTelemetryEvent } from '../../../telemetry';
    9 9  import { useGetSchemaRegistryNotificationColor } from '../../../features/SchemaRegistry/hooks/useGetSchemaRegistryNotificationColor';
    10  -import { getLSItem, LS_KEYS, setLSItem } from '../../../utils';
     10 +import { getLSItem, isCloudConsole, LS_KEYS, setLSItem } from '../../../utils';
    11 11  import {
    12 12   BreakingChangesColor,
    13 13   BreakingChangesTooltipMessage,
    skipped 25 lines
    39 39   dataTestVal: 'rest-explorer-link',
    40 40   title: 'REST',
    41 41   },
    42  - {
    43  - key: 'schema-registry',
    44  - link: '/api/schema-registry',
    45  - dataTestVal: 'schema-registry-link',
    46  - title: 'Schema Registry',
    47  - },
    48 42   ],
    49 43   [
    50 44   {
    skipped 12 lines
    63 57   dataTestVal: 'security-explorer-link',
    64 58   title: 'Security',
    65 59   });
     60 + 
     61 + if (
     62 + isCloudConsole(globals) &&
     63 + (globals.userRole === 'admin' || globals.userRole === 'owner')
     64 + ) {
     65 + sectionsData[0].push({
     66 + key: 'schema-registry',
     67 + link: '/api/schema-registry',
     68 + dataTestVal: 'schema-registry-link',
     69 + title: 'Schema Registry',
     70 + });
     71 + }
    66 72   }
    67 73   
    68 74   const isActive = (link: string) => {
    skipped 113 lines
  • ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/components/Services/Data/DataRouter.js
    skipped 42 lines
    43 43  import { NativeQueryRoute } from '../../../features/Data/LogicalModels/AddNativeQuery/NativeQueryLandingPage';
    44 44  import { LogicalModelRoute } from '../../../features/Data/LogicalModels/LogicalModel/LogicalModelLandingPage';
    45 45  import { ModelSummaryContainer } from './ModelSummary/ModelSummaryContainer';
     46 +import { PermissionSummary } from '../../../features/Data/ManageDatabase/parts/PermissionSummary';
    46 47   
    47 48  const makeDataRouter = (
    48 49   connect,
    skipped 17 lines
    66 67   <Route path="connect" component={ConnectDatabaseRouteWrapper} />
    67 68   <Route path="database/add" component={ConnectUIContainer} />
    68 69   <Route path="database/edit" component={ConnectUIContainer} />
     70 + <Route
     71 + path="database/permission-summary"
     72 + component={PermissionSummary}
     73 + />
    69 74   <Route path="table" component={ManageTable}>
    70 75   <IndexRedirect to="modify" />
    71 76   <Route path=":operation" component={ManageTable} />
    skipped 178 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/components/Services/Data/RawSQL/RawSQL.js
    1  -import React, { useEffect, useState } from 'react';
     1 +import React, { useEffect, useState, useRef } from 'react';
    2 2  import PropTypes from 'prop-types';
    3 3  import Helmet from 'react-helmet';
    4 4  import AceEditor from 'react-ace';
    skipped 122 lines
    127 127   const [selectedDriver, setSelectedDriver] = useState(null);
    128 128   const [suggestLangChange, setSuggestLangChange] = useState(false);
    129 129   
     130 + const selectedDriverRef = useRef(selectedDriver);
     131 + const sqlTextRef = useRef(sqlText);
     132 + 
     133 + useEffect(() => {
     134 + selectedDriverRef.current = selectedDriver;
     135 + sqlTextRef.current = sqlText;
     136 + }, [selectedDriver, sqlText]);
     137 + 
    130 138   useEffect(() => {
    131 139   const driver = getSourceDriver(metadataSources, selectedDatabase);
    132 140   setSelectedDriver(driver);
    skipped 48 lines
    181 189   }, [sql, selectedDriver]);
    182 190   
    183 191   const submitSQL = () => {
    184  - if (!nativeDrivers.includes(selectedDriver)) {
     192 + if (!nativeDrivers.includes(selectedDriverRef.current)) {
    185 193   fetchRunSQLResult({
    186  - driver: selectedDriver,
     194 + driver: selectedDriverRef.current,
    187 195   dataSourceName: selectedDatabase,
    188  - sql: sqlText,
     196 + sql: sqlTextRef.current,
    189 197   });
    190 198   return;
    191 199   }
    192 200   
    193  - if (!sqlText) {
     201 + if (!sqlTextRef.current) {
    194 202   setLSItem(LS_KEYS.rawSQLKey, '');
    195 203   return;
    196  - } else if (checkTextLength(sqlText)) {
     204 + } else if (checkTextLength(sqlTextRef.current)) {
    197 205   // set SQL to LS
    198  - setLSItem(LS_KEYS.rawSQLKey, sqlText);
     206 + setLSItem(LS_KEYS.rawSQLKey, sqlTextRef.current);
    199 207   }
    200 208   
    201 209   // check migration mode global
    skipped 7 lines
    209 217   }
    210 218   if (!isMigration && globals.consoleMode === CLI_CONSOLE_MODE) {
    211 219   // if migration is not checked, check if is schema modification
    212  - if (services[selectedDriver].checkSchemaModification(sqlText)) {
     220 + if (
     221 + services[selectedDriverRef.current].checkSchemaModification(
     222 + sqlTextRef.current
     223 + )
     224 + ) {
    213 225   dispatch(modalOpen());
    214 226   return;
    215 227   }
    skipped 4 lines
    220 232   migrationName,
    221 233   statementTimeout,
    222 234   selectedDatabase,
    223  - selectedDriver
     235 + selectedDriverRef.current
    224 236   )
    225 237   );
    226 238   dispatch(_push('/data/sql'));
    skipped 100 lines
    327 339   {
    328 340   name: 'submit',
    329 341   bindKey: { win: 'Ctrl-Enter', mac: 'Command-Enter' },
    330  - exec: () => {
    331  - if (sqlText) {
     342 + exec: editor => {
     343 + if (editor?.getValue()) {
    332 344   submitSQL();
    333 345   }
    334 346   },
    skipped 348 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/BrowseRows/components/RunQuery/Filter/FilterRow.tsx
    skipped 26 lines
    27 27   const localValue = watch(`${name}.value`);
    28 28   
    29 29   /**
     30 + * Set operator to first operator if it is empty
     31 + */
     32 + useEffect(() => {
     33 + if (!localOperator && operatorOptions[0]?.value) {
     34 + setValue(`${name}.operator`, operatorOptions[0].value);
     35 + }
     36 + }, [localOperator, operatorOptions, setValue, name]);
     37 + 
     38 + /**
    30 39   * Set the default value into the input field depending on the operator type
    31 40   */
    32 41   useEffect(() => {
    skipped 40 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/BrowseRows/components/RunQuery/Filter/FilterRows.tsx
    skipped 51 lines
    52 52   onRemove?.();
    53 53   };
    54 54   
    55  - const columnOptions: SelectItem[] = columns.map(column => {
    56  - return {
    57  - label: column.name,
    58  - value: column.name,
    59  - };
    60  - });
     55 + const columnOptions: SelectItem[] = columns
     56 + .sort((a, b) => (a.name > b.name ? 1 : -1))
     57 + .map(column => {
     58 + return {
     59 + label: column.name,
     60 + value: column.name,
     61 + };
     62 + });
    61 63   
    62 64   const operatorOptions: SelectItem[] = operators.map(operator => ({
    63 65   label: `[${operator.value}] ${operator.name}`,
    skipped 45 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/BrowseRows/components/RunQuery/LegacyRunQueryContainer/LegacyRunQuery.stories.tsx
    skipped 76 lines
    77 77   expect(await canvas.findAllByDisplayValue('Select a column')).toHaveLength(
    78 78   2
    79 79   );
    80  - expect(await canvas.findByDisplayValue('Select an operator')).toBeVisible();
     80 + expect(await canvas.findByTestId('filters.0.operator')).toBeVisible();
    81 81   expect(await canvas.findByPlaceholderText('-- value --')).toBeVisible();
    82 82   
    83 83   // select the value "id" in the "column" select
    skipped 4 lines
    88 88   
    89 89   // select the value "_eq" in the "operator" select
    90 90   userEvent.selectOptions(
    91  - await canvas.findByDisplayValue('Select an operator'),
     91 + await canvas.findByTestId('filters.0.operator'),
    92 92   '_eq'
    93 93   );
    94 94   
    skipped 13 lines
    108 108   );
    109 109   
    110 110   // select the value "_neq" in the second "operator" select
     111 + 
    111 112   userEvent.selectOptions(
    112  - await canvas.findByDisplayValue('Select an operator'),
     113 + await canvas.findByTestId('filters.1.operator'),
    113 114   '_neq'
    114 115   );
    115 116   
    skipped 22 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/BrowseRows/components/RunQuery/Sort/SortRows.tsx
    skipped 31 lines
    32 32   }
    33 33   }, [initialSorts?.length]);
    34 34   
    35  - const columnOptions: SelectItem[] = columns.map(column => {
    36  - return {
    37  - label: column.name,
    38  - value: column.name,
    39  - };
    40  - });
     35 + const columnOptions: SelectItem[] = columns
     36 + .sort((a, b) => (a.name > b.name ? 1 : -1))
     37 + .map(column => {
     38 + return {
     39 + label: column.name,
     40 + value: column.name,
     41 + };
     42 + });
    41 43   
    42 44   const orderByOptions = [
    43 45   {
    skipped 48 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/BrowseRows/hooks/useExportRows/useExportRows.test.ts
    skipped 166 lines
    167 167   
    168 168   expect(downloadObjectAsJsonFile).not.toHaveBeenCalled();
    169 169   expect(downloadObjectAsCsvFile).toHaveBeenCalledWith(
    170  - expect.stringContaining('export_Album_'),
     170 + expect.stringContaining('export_public_Album'),
    171 171   expectedResult
    172 172   );
    173 173   });
    skipped 7 lines
    181 181   
    182 182   expect(downloadObjectAsCsvFile).not.toHaveBeenCalled();
    183 183   expect(downloadObjectAsJsonFile).toHaveBeenCalledWith(
    184  - expect.stringContaining('export_Album_'),
     184 + expect.stringContaining('export_public_Album'),
    185 185   expectedResult
    186 186   );
    187 187   });
    skipped 2 lines
  • ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ConnectDBRedesign/components/ConnectMssqlWidget/parts/PoolSettings.tsx
    skipped 11 lines
    12 12   <NumberInputField
    13 13   name={`${name}.idleTimeout`}
    14 14   label="Idle Timeout"
    15  - placeholder="180"
     15 + placeholder="5"
    16 16   tooltip="The idle timeout (in seconds) per connection"
    17 17   />
    18 18   </>
    skipped 3 lines
  • ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ConnectDBRedesign/components/ConnectPostgresWidget/parts/PoolSettings.tsx
    skipped 32 lines
    33 33   type="number"
    34 34   name={`${name}.idleTimeout`}
    35 35   label="Idle Timeout"
    36  - placeholder="180"
     36 + placeholder={isCloudConsole(globals) ? '30' : '180'}
    37 37   tooltip="The idle timeout (in seconds) per connection"
    38 38   fieldProps={commonFieldProps}
    39 39   />
    skipped 28 lines
  • frontend/libs/console/legacy-ce/src/lib/features/ConnectDBRedesign/graphics/db-logos/clickhouse.svg
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ConnectDBRedesign/graphics/db-logos/dbLogos.ts
     1 +import postgresLogo from './postgres.webp';
     2 +import googleLogo from './google.webp';
     3 +import microsoftLogo from './microsoft.webp';
     4 +import citusLogo from './citus.webp';
     5 +import cockroachLogo from './cockroach.webp';
     6 +import amazonLogo from './amazon.webp';
     7 +import snowflakeLogo from './snowflake.webp';
     8 +import mysqlLogo from './mysql.webp';
     9 +import sqliteLogo from './sqlite.webp';
     10 +import mariadbLogo from './mariadb.webp';
     11 +import oracleLogo from './oracle.webp';
     12 +import mongodbLogo from './mongodb.svg';
     13 +import clickhouseLogo from './clickhouse.svg';
     14 +import trinoLogo from './trino.svg';
     15 + 
     16 +export const dbLogos: Record<string, string> = {
     17 + pg: postgresLogo,
     18 + postgres: postgresLogo,
     19 + alloy: googleLogo,
     20 + citus: citusLogo,
     21 + cockroach: cockroachLogo,
     22 + mssql: microsoftLogo,
     23 + bigquery: googleLogo,
     24 + snowflake: snowflakeLogo,
     25 + athena: amazonLogo,
     26 + mysql8: mysqlLogo,
     27 + mysql: mysqlLogo,
     28 + sqlite: sqliteLogo,
     29 + mariadb: mariadbLogo,
     30 + oracle: oracleLogo,
     31 + mongo: mongodbLogo,
     32 + mongodb: mongodbLogo,
     33 + clickhouse: clickhouseLogo,
     34 + trino: trinoLogo,
     35 +};
     36 + 
  • ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ConnectDBRedesign/graphics/db-logos/index.ts
    1  -import postgresLogo from './postgres.webp';
    2  -import googleLogo from './google.webp';
    3  -import microsoftLogo from './microsoft.webp';
    4  -import citusLogo from './citus.webp';
    5  -import cockroachLogo from './cockroach.webp';
    6  -import amazonLogo from './amazon.webp';
    7  -import snowflakeLogo from './snowflake.webp';
    8  -import defaultDbLogo from './default.svg';
    9  -import mysqlLogo from './mysql.webp';
    10  -import sqliteLogo from './sqlite.webp';
    11  -import mariadbLogo from './mariadb.webp';
    12  -import oracleLogo from './oracle.webp';
    13  -import mongodbLogo from './mongodb.svg';
    14  - 
    15  -const dbLogos: Record<string, string> = {
    16  - pg: postgresLogo,
    17  - postgres: postgresLogo,
    18  - alloy: googleLogo,
    19  - citus: citusLogo,
    20  - cockroach: cockroachLogo,
    21  - mssql: microsoftLogo,
    22  - bigquery: googleLogo,
    23  - snowflake: snowflakeLogo,
    24  - athena: amazonLogo,
    25  - default: defaultDbLogo,
    26  - mysql8: mysqlLogo,
    27  - sqlite: sqliteLogo,
    28  - mariadb: mariadbLogo,
    29  - oracle: oracleLogo,
    30  - mongo: mongodbLogo,
    31  -};
    32  - 
    33  -export default dbLogos;
     1 +export { dbLogos } from './dbLogos';
     2 +export { resolveDbLogo } from './resolveDbLogo';
    34 3   
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ConnectDBRedesign/graphics/db-logos/resolveDbLogo.ts
     1 +import defaultDbLogo from './default.svg';
     2 +import { dbLogos } from './dbLogos';
     3 + 
     4 +export const resolveDbLogo = (dbKind: string) => {
     5 + const kind = dbKind.toLowerCase();
     6 + if (dbLogos[kind]) {
     7 + return dbLogos[kind];
     8 + } else {
     9 + const fuzzyFind = Object.keys(dbLogos).find(
     10 + key => key.includes(kind) || kind.includes(key)
     11 + );
     12 + 
     13 + if (fuzzyFind) {
     14 + return dbLogos[fuzzyFind];
     15 + }
     16 + return defaultDbLogo;
     17 + }
     18 +};
     19 + 
  • frontend/libs/console/legacy-ce/src/lib/features/ConnectDBRedesign/graphics/db-logos/trino.svg
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ConnectDBRedesign/hooks/useConnectDatabaseDrivers.tsx
    1 1  import { sortBy, uniqBy } from 'lodash'; // eslint-disable-line @typescript-eslint/no-restricted-imports
    2 2  import { useAvailableDrivers } from '../../ConnectDB/hooks';
    3 3  import { DriverInfo } from '../../DataSource';
    4  -import { DatabaseLogo } from '../components';
    5  -import dbLogos from '../graphics/db-logos';
    6 4  import { SuperConnectorDrivers as SuperDrivers } from '../../hasura-metadata-types';
     5 +import { DatabaseLogo } from '../components';
     6 +import { resolveDbLogo } from '../graphics/db-logos';
    7 7   
    8 8  type useDatabaseConnectDriversProps = {
    9 9   onFirstSuccess?: (data: DriverInfo[]) => void;
    skipped 86 lines
    96 96   <DatabaseLogo
    97 97   title={d.displayName}
    98 98   noConnection={d.available === false}
    99  - image={dbLogos[d.name] || dbLogos.default}
     99 + image={resolveDbLogo(d.name)}
    100 100   releaseName={d.release}
    101 101   />
    102 102   ),
    skipped 5 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ConnectDBRedesign/mocks/data.mock.ts
    skipped 253 lines
    254 254   kind: 'mysql8',
    255 255   release_name: 'Alpha',
    256 256   },
     257 + {
     258 + available: true,
     259 + builtin: false,
     260 + display_name: 'ClickHouse',
     261 + kind: 'ClickHouse',
     262 + release_name: '2.35.2',
     263 + },
     264 + {
     265 + available: true,
     266 + builtin: false,
     267 + display_name: 'MongoDB',
     268 + kind: 'mongo',
     269 + release_name: 'GA',
     270 + },
     271 + {
     272 + available: true,
     273 + builtin: false,
     274 + display_name: 'Trino',
     275 + kind: 'Trino',
     276 + release_name: 'Beta',
     277 + },
    257 278   ],
    258 279   },
    259 280   agentsAddedSuperConnectorNotAvailable: {
    skipped 102 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ConnectDBRedesign/prototypes/ConnectDatabaseSidebar.tsx
    skipped 6 lines
    7 7  import { InputField, useConsoleForm } from '../../../new-components/Form';
    8 8  import { DriverInfo } from '../../DataSource';
    9 9  import { useAddAgent } from '../../ManageAgents/hooks';
    10  -import { agentPaths } from '../components/SetupConnector/hooks/useSuperConnectorAgents';
     10 +import { SuperConnectorDrivers } from '../../hasura-metadata-types';
    11 11  import { ConnectDatabaseProps } from '../ConnectDatabase';
    12  -import dbLogos from '../graphics/db-logos';
     12 +import { agentPaths } from '../components/SetupConnector/hooks/useSuperConnectorAgents';
     13 +import { resolveDbLogo } from '../graphics/db-logos/';
    13 14  import { useDatabaseConnectDrivers } from '../hooks';
    14  -import { SuperConnectorDrivers } from '../../hasura-metadata-types';
    15 15   
    16 16  export const ConnectDatabaseSidebar = (props: ConnectDatabaseProps) => {
    17 17   const { consoleType } = props;
    skipped 119 lines
    137 137   <img
    138 138   className="h-12 w-12 object-contain"
    139 139   alt={`${d.displayName} logo`}
    140  - src={dbLogos[d.name] ?? dbLogos.default}
     140 + src={resolveDbLogo(d.name)}
    141 141   />
    142 142   <div className="font-bold text-muted transition-all duration-200 group-hover:text-slate-900 flex-grow items-center flex flex-row justify-between">
    143 143   {d.displayName}
    skipped 26 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/Data/LogicalModels/StoredProcedures/StoredProcedureWidget.stories.tsx
    skipped 53 lines
    54 54   {},
    55 55   { timeout: 4000 }
    56 56   ),
    57  - 'stored_procedure_1'
     57 + 'dbo.stored_procedure_1'
    58 58   );
    59 59   
    60 60   fireEvent.click(await canvas.findByText('Add new argument'));
    skipped 47 lines
    108 108   {},
    109 109   { timeout: 4000 }
    110 110   ),
    111  - 'stored_procedure_1'
     111 + 'dbo.stored_procedure_1'
    112 112   );
    113 113   
    114 114   fireEvent.click(await canvas.findByText('Add new argument'));
    skipped 49 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/Data/ManageDatabase/ManageDatabase.tsx
    skipped 10 lines
    11 11  import { useDriverCapabilities } from '../hooks/useDriverCapabilities';
    12 12  import { TAB_COLORS } from './constants';
    13 13  import { BreadCrumbs, CollapsibleResource, SourceName } from './parts';
     14 +import { Link } from 'react-router';
     15 +import { Button } from '../../../new-components/Button';
     16 +import { managePermissionSummaryUrl } from '../../DataSidebar/navigation-utils';
    14 17   
    15 18  export interface ManageDatabaseProps {
    16 19   dataSourceName: string;
    skipped 31 lines
    48 51   <div className="w-full overflow-y-auto bg-gray-50">
    49 52   <div className="px-md pt-md mb-xs">
    50 53   <BreadCrumbs dataSourceName={dataSourceName} />
    51  - <SourceName dataSourceName={dataSourceName} schema={schema} />
     54 + <div className="flex items-center">
     55 + <SourceName dataSourceName={dataSourceName} schema={schema} />
     56 + <Link
     57 + to={managePermissionSummaryUrl(dataSourceName)}
     58 + style={{ marginLeft: '20px' }}
     59 + >
     60 + <Button size="sm">Show Permissions Summary</Button>
     61 + </Link>
     62 + </div>
    52 63   </div>
    53 64   <div className="px-md group relative gap-2 flex-col flex">
    54 65   {USE_TABS ? (
    skipped 158 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/Data/ManageDatabase/parts/PermissionSummary.tsx
     1 +import Helmet from 'react-helmet';
     2 +import { useURLParameters } from '../../ManageFunction/hooks/useUrlParameters';
     3 +import { areTablesEqual, useMetadata } from '../../../hasura-metadata-api';
     4 +import Select from 'react-select';
     5 +import { useEffect, useState } from 'react';
     6 +import { MetadataTable } from '../../../hasura-metadata-types';
     7 +import { FaCheck, FaEdit, FaTimes } from 'react-icons/fa';
     8 +import { useDispatch } from 'react-redux';
     9 +import { getRoute } from '../../../../utils/getDataRoute';
     10 +import _push from '../../../../components/Services/Data/push';
     11 +import { BreadCrumbs } from './BreadCrumbs';
     12 +import { getTableDisplayName } from '../../TrackResources/TrackRelationships/utils';
     13 + 
     14 +type PermissionType =
     15 + | 'select_permissions'
     16 + | 'insert_permissions'
     17 + | 'update_permissions'
     18 + | 'delete_permissions';
     19 + 
     20 +const permissionType = [
     21 + {
     22 + label: 'Select permissions',
     23 + value: 'select_permissions',
     24 + },
     25 + {
     26 + label: 'Insert permissions',
     27 + value: 'insert_permissions',
     28 + },
     29 + {
     30 + label: 'Update permissions',
     31 + value: 'update_permissions',
     32 + },
     33 + {
     34 + label: 'Delete permissions',
     35 + value: 'delete_permissions',
     36 + },
     37 +];
     38 + 
     39 +export const PermissionSummary = () => {
     40 + const [selectedPermissionType, setSelectedPermissionType] =
     41 + useState<PermissionType>();
     42 + const { data: metadata, refetch } = useMetadata();
     43 + useEffect(() => {
     44 + refetch();
     45 + }, []);
     46 + const dispatch = useDispatch();
     47 + const urlData = useURLParameters(window.location);
     48 + if (urlData.querystringParseResult === 'error')
     49 + return <>Something went wrong while parsing the URL parameters</>;
     50 + const { database } = urlData.data;
     51 + const metadataObjectOfCurrentDatasource = metadata?.metadata.sources.find(
     52 + i => i.name === database
     53 + );
     54 + 
     55 + const dataSourceTables = metadataObjectOfCurrentDatasource?.tables;
     56 + 
     57 + const getRoles = (): string[] => {
     58 + const roles = new Set<string>();
     59 + 
     60 + dataSourceTables?.forEach(table => {
     61 + const permissionTypes: PermissionType[] = [
     62 + 'select_permissions',
     63 + 'insert_permissions',
     64 + 'update_permissions',
     65 + 'delete_permissions',
     66 + ];
     67 + 
     68 + permissionTypes.forEach(permissionType => {
     69 + const permissions = table[permissionType];
     70 + permissions?.forEach(permission => {
     71 + roles.add(permission.role);
     72 + });
     73 + });
     74 + });
     75 + 
     76 + return Array.from(roles);
     77 + };
     78 + 
     79 + const getTablePermissions = (
     80 + table: MetadataTable | undefined,
     81 + type: PermissionType
     82 + ) => {
     83 + switch (type) {
     84 + case 'select_permissions':
     85 + return table?.select_permissions || [];
     86 + case 'insert_permissions':
     87 + return table?.insert_permissions || [];
     88 + case 'update_permissions':
     89 + return table?.update_permissions || [];
     90 + case 'delete_permissions':
     91 + return table?.delete_permissions || [];
     92 + default:
     93 + return [];
     94 + }
     95 + };
     96 + 
     97 + return (
     98 + <div className="pl-5 w-fit pt-5">
     99 + <Helmet title="Permissions Summary | Hasura" />
     100 + <BreadCrumbs dataSourceName={database} />
     101 + <div className="my-5">
     102 + <h2 className="font-extrabold text-xl pb-5">
     103 + Permissions summary - {database}
     104 + </h2>
     105 + </div>
     106 + <div className="pb-2 max-w-[200px]">
     107 + <Select
     108 + value={permissionType.find(
     109 + option => option.value === selectedPermissionType
     110 + )}
     111 + options={permissionType}
     112 + defaultValue={permissionType.find(
     113 + option => option.value === 'select_permissions'
     114 + )}
     115 + onChange={(
     116 + selectedOption: { value: string; label: string } | null
     117 + ) => {
     118 + if (selectedOption === null) return;
     119 + setSelectedPermissionType(selectedOption.value as PermissionType);
     120 + }}
     121 + />
     122 + </div>
     123 + <table className="w-full border border-gray-300 rounded-md">
     124 + <thead>
     125 + <tr>
     126 + <th className="px-4 py-2 border" />
     127 + {getRoles().length ? (
     128 + getRoles().map(role => (
     129 + <th
     130 + key={role}
     131 + className="border py-2 px-4 font-bold min-w-24"
     132 + style={{ minWidth: '100px' }}
     133 + >
     134 + {role}
     135 + </th>
     136 + ))
     137 + ) : (
     138 + <th
     139 + className="border py-2 px-4 font-bold min-w-24"
     140 + style={{ minWidth: '100px' }}
     141 + >
     142 + No Role found
     143 + </th>
     144 + )}
     145 + </tr>
     146 + </thead>
     147 + <tbody>
     148 + {dataSourceTables?.length ? (
     149 + dataSourceTables.map(({ table }, rowIndex) => (
     150 + <tr key={getTableDisplayName(table)}>
     151 + <th
     152 + className={`border py-2 px-4 min-w-24 ${
     153 + rowIndex === 0 ? 'border-t' : ''
     154 + }`}
     155 + >
     156 + <span className="font-bold">
     157 + {getTableDisplayName(table)}
     158 + </span>
     159 + </th>
     160 + {getRoles().map((role, colIndex) => {
     161 + const permission = getTablePermissions(
     162 + dataSourceTables?.find(t => areTablesEqual(t.table, table)),
     163 + selectedPermissionType ?? 'select_permissions'
     164 + );
     165 + 
     166 + const isChecked =
     167 + permission?.some(
     168 + (p: { role: string }) => p.role === role
     169 + ) ?? false;
     170 + 
     171 + return (
     172 + <td
     173 + key={`${getTableDisplayName(table)}-${role}`}
     174 + className={`border p-2 min-w-24 ${
     175 + rowIndex === 0 ? 'border-t' : ''
     176 + } ${colIndex === 0 ? 'border-l' : ''}`}
     177 + >
     178 + {isChecked ? (
     179 + <span
     180 + onClick={() =>
     181 + dispatch(
     182 + _push(
     183 + getRoute().table(database, table, 'permissions')
     184 + )
     185 + )
     186 + }
     187 + role="img"
     188 + aria-label="checkmark"
     189 + className="text-green-500 flex justify-center group transition-transform transform hover:scale-110 cursor-pointer"
     190 + >
     191 + <FaCheck className="default-icon" />
     192 + <FaEdit className="hover-icon hidden group-hover:inline-block text-slate-400 ml-2" />
     193 + </span>
     194 + ) : (
     195 + <span
     196 + role="img"
     197 + aria-label="crossmark"
     198 + className="text-red-500 flex justify-center"
     199 + >
     200 + <FaTimes />
     201 + </span>
     202 + )}
     203 + </td>
     204 + );
     205 + })}
     206 + </tr>
     207 + ))
     208 + ) : (
     209 + <th className="border py-2 px-4 min-w-24">
     210 + <span className="font-bold">No table found</span>
     211 + </th>
     212 + )}
     213 + </tbody>
     214 + </table>
     215 + </div>
     216 + );
     217 +};
     218 + 
  • ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/Data/ManageDatabase/parts/SchemaDropdown.tsx
    skipped 86 lines
    87 87   });
    88 88   };
    89 89   
    90  - const handlePermissionsSummary = (schema: string) => {
    91  - push(`data/${dataSourceName}/schema/${schema}/permissions`);
    92  - };
    93  - 
    94 90   const handleCreateTable = (schema: string) => {
    95 91   push(`data/${dataSourceName}/schema/${schema}/table/add`);
    96 92   };
    skipped 48 lines
    145 141   </DropDown.BasicItem>
    146 142   ))}
    147 143   </DropDown.SubMenu>
    148  - <DropDown.SubMenu label="Permissions Summary">
    149  - <DropDown.Label>Choose Schema To View...</DropDown.Label>
    150  - {schemas.map(name => (
    151  - <DropDown.BasicItem
    152  - onClick={() => handlePermissionsSummary(name)}
    153  - key={name}
    154  - >
    155  - {name}
    156  - </DropDown.BasicItem>
    157  - ))}
    158  - </DropDown.SubMenu>
     144 + 
    159 145   <DropDown.Label>Tables</DropDown.Label>
    160 146   <DropDown.SubMenu
    161 147   label={
    skipped 20 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/Data/ManageTable/ManageTable.tsx
    skipped 125 lines
    126 126   const urlData = useTableDefinition(window.location);
    127 127   const dispatch = useDispatch();
    128 128   
    129  - console.log('>>> urlData', urlData);
    130  - 
    131 129   if (urlData.querystringParseResult === 'error')
    132 130   throw Error('Unable to render');
    133 131   
    skipped 70 lines
  • ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/Data/hooks/useGraphQLMutation.ts
    1 1  import { useMutation } from 'react-query';
    2 2  import { runGraphQL } from '../../DataSource';
    3 3  import { useHttpClient } from '../../Network';
    4  -import { AxiosResponseHeaders } from 'axios';
    5 4   
    6 5  export function useGraphQLMutation({
    7 6   operationName,
    skipped 2 lines
    10 9   onSuccess,
    11 10  }: {
    12 11   operationName: string;
    13  - headers?: AxiosResponseHeaders;
     12 + headers?: Record<string, string>;
    14 13   onSuccess?: () => void;
    15 14   onError?: (err: Error) => void;
    16 15  }) {
    skipped 22 lines
  • ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/DataSidebar/navigation-utils.ts
    skipped 18 lines
    19 19  export const manageDatabaseUrl = (dataSourceName: string) =>
    20 20   `/data/v2/manage/database?database=${encodeURIComponent(dataSourceName)}`;
    21 21   
     22 +export const managePermissionSummaryUrl = (dataSourceName: string) =>
     23 + `/data/v2/manage/database/permission-summary?database=${encodeURIComponent(
     24 + dataSourceName
     25 + )}`;
     26 + 
    22 27  export const manageFunctionUrl = ({
    23 28   fn,
    24 29   dataSourceName,
    skipped 8 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/DataSource/api.ts
    1  -import { AxiosInstance, AxiosResponseHeaders } from 'axios';
     1 +import { AxiosInstance } from 'axios';
    2 2  import {
    3 3   NativeDrivers,
    4 4   Source,
    skipped 56 lines
    61 61  }: {
    62 62   operationName: string;
    63 63   query: string;
    64  - headers?: AxiosResponseHeaders;
     64 + headers?: Record<string, string>;
    65 65  } & NetworkArgs) => {
    66 66   try {
    67 67   const result = await httpClient.post('v1/graphql', {
    skipped 173 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/DatabaseRelationships/DatabaseRelationships.stories.tsx
    skipped 57 lines
    58 58   const arrayRelationships = await canvas.findAllByText('Array');
    59 59   expect(arrayRelationships).toHaveLength(2);
    60 60   
    61  - expect(await canvas.findByText('AlbumCovers')).toBeVisible();
    62  - expect(await canvas.findByText('Track')).toBeVisible();
     61 + expect(await canvas.findByText('public.AlbumCovers')).toBeVisible();
     62 + expect(await canvas.findByText('public.Track')).toBeVisible();
    63 63   
    64 64   expect(await canvas.findAllByText('Rename')).toHaveLength(2);
    65 65   
    skipped 5 lines
    71 71   expect(await canvas.findByText('SUGGESTED RELATIONSHIPS')).toBeVisible();
    72 72   expect(await canvas.findByText('artist')).toBeVisible();
    73 73   expect(await canvas.findAllByText('Object')).toHaveLength(2);
    74  - expect(await canvas.findAllByText('Artist')).toHaveLength(2);
    75  - 
     74 + expect(await canvas.findAllByText('public.Artist')).toHaveLength(1);
     75 + expect(await canvas.findAllByText('dbo.Artist')).toHaveLength(1);
    76 76   expect(await canvas.findByText('Add')).toBeVisible();
    77 77   });
    78 78   
    skipped 48 lines
  • ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/DatabaseRelationships/utils/helpers.ts
    skipped 23 lines
    24 24   }
    25 25   
    26 26   if (typeof table === 'object' && isSchemaTable(table)) {
    27  - return table.name;
     27 + return [table.schema, table.name].join('.');
    28 28   }
    29 29   
    30 30   if (isObject(table)) {
    skipped 16 lines
  • ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/EETrial/components/ActivateEEForm/SuccessScreen/SuccessScreen.tsx
    skipped 26 lines
    27 27   <FaCheck />
    28 28   </div>
    29 29   <h1 className="text-xl text-slate-900 font-semibold mb-sm">
    30  - Your 30-day trial of Hasura Enterprise has been activated
     30 + Your trial of Hasura Enterprise has been activated
    31 31   </h1>
    32 32   <p className="text-muted mt-0 mb-sm">
    33 33   <strong>What&apos;s next?</strong>
    skipped 42 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ManageAgents/_test_/useListAvailableAgentsFromMetadata.spec.ts
    skipped 44 lines
    45 45   );
    46 46   
    47 47   const expectedResult: DcAgent[] = [
    48  - { name: 'csv', url: 'http://host.docker.internal:8101' },
    49  - { name: 'sqlite', url: 'http://host.docker.internal:8100' },
     48 + { name: 'csv', uri: 'http://host.docker.internal:8101' },
     49 + { name: 'sqlite', uri: 'http://host.docker.internal:8100' },
    50 50   ];
    51 51   
    52 52   await waitFor(() => result.current.isSuccess);
    skipped 5 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ManageAgents/components/AddAgentForm.tsx
    skipped 1 lines
    2 2  import { InputField, SimpleForm } from '../../../new-components/Form';
    3 3  import { z } from 'zod';
    4 4  import { useAddAgent } from '../hooks/useAddAgent';
     5 +import { UrlInput } from './UrlInput';
    5 6   
    6 7  interface CreateAgentFormProps {
    7 8   onClose: () => void;
    8 9   onSuccess?: () => void;
    9 10  }
    10 11   
    11  -const schema = z.object({
     12 +export const schema = z.object({
    12 13   name: z.string().min(1, 'Name is required!'),
    13  - url: z.string().min(1, 'URL is required!'),
     14 + url: z.discriminatedUnion('type', [
     15 + z.object({
     16 + type: z.literal('url'),
     17 + value: z.string().min(1, 'URL is required!'),
     18 + }),
     19 + z.object({
     20 + type: z.literal('envVar'),
     21 + value: z.string().min(1, 'ENV variable is required'),
     22 + }),
     23 + ]),
    14 24  });
    15 25   
    16  -type FormValues = z.infer<typeof schema>;
     26 +export type FormValues = z.infer<typeof schema>;
    17 27   
    18 28  export const AddAgentForm = (props: CreateAgentFormProps) => {
    19 29   const { addAgent, isLoading } = useAddAgent();
    20 30   
    21 31   const handleSubmit = (values: FormValues) => {
    22 32   addAgent({
    23  - ...values,
     33 + name: values.name,
     34 + url:
     35 + values.url.type === 'envVar'
     36 + ? { from_env: values.url.value }
     37 + : values.url.value,
    24 38   }).then(response => {
    25 39   response.makeToast();
    26 40   if (response.status === 'added') {
    skipped 7 lines
    34 48   schema={schema}
    35 49   // something is wrong with type inference with react-hook-form form wrapper. temp until the issue is resolved
    36 50   onSubmit={handleSubmit}
    37  - options={{ defaultValues: { url: '', name: '' } }}
     51 + options={{
     52 + defaultValues: { url: { type: 'envVar', value: '' }, name: '' },
     53 + }}
    38 54   className="py-4"
    39 55   >
    40 56   <div className="bg-white p-6 border border-gray-300 rounded space-y-4 mb-6 max-w-xl">
    skipped 10 lines
    51 67   placeholder="Enter the name of the agent"
    52 68   />
    53 69   
    54  - <InputField
    55  - label="URL"
    56  - name="url"
    57  - type="text"
    58  - tooltip="The URL of the data connector agent"
    59  - placeholder="Enter the URI of the agent"
    60  - />
     70 + <UrlInput />
     71 + 
    61 72   <div className="flex gap-4 justify-end">
    62 73   <Button type="submit" mode="primary" isLoading={isLoading}>
    63 74   Connect
    skipped 18 lines
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ManageAgents/components/ManageAgentsTable.tsx
    skipped 2 lines
    3 3  import { FaTrash } from 'react-icons/fa';
    4 4  import { useListAvailableAgentsFromMetadata } from '../hooks';
    5 5  import { useRemoveAgent } from '../hooks/useRemoveAgent';
     6 +import { DataConnectorUri } from '../../hasura-metadata-types';
    6 7   
    7 8  export const ManageAgentsTable = () => {
    8 9   const { data, isLoading } = useListAvailableAgentsFromMetadata();
    skipped 30 lines
    39 40   {agent.name}
    40 41   </CardedTable.TableBodyCell>
    41 42   <CardedTable.TableBodyCell>
    42  - {agent.url}
     43 + <DataConnectorUriLabel uri={agent.uri} />
    43 44   </CardedTable.TableBodyCell>
    44 45   <CardedTable.TableBodyCell>
    45 46   <div className="flex items-center justify-end whitespace-nowrap text-right opacity-0 transition-all duration-200 group-hover:opacity-100">
    skipped 17 lines
    63 64   );
    64 65  };
    65 66   
     67 +const DataConnectorUriLabel = (props: {
     68 + uri: DataConnectorUri;
     69 +}): JSX.Element => {
     70 + if (typeof props.uri === 'string') {
     71 + return <>{props.uri}</>;
     72 + } else {
     73 + return (
     74 + <>
     75 + Environment Variable: <code>{props.uri.from_env}</code>
     76 + </>
     77 + );
     78 + }
     79 +};
     80 + 
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ManageAgents/components/UrlInput.tsx
     1 +import { useFormContext } from 'react-hook-form';
     2 +import { FormValues } from './AddAgentForm';
     3 +import { InputField, Radio } from '../../../new-components/Form';
     4 + 
     5 +export const UrlInput = () => {
     6 + const { watch } = useFormContext<FormValues>();
     7 + const selectedType = watch('url.type');
     8 + console.log({ selectedType });
     9 + return (
     10 + <div>
     11 + <Radio
     12 + label="URL"
     13 + name="url.type"
     14 + options={[
     15 + { label: 'Using URL value', value: 'url' },
     16 + {
     17 + label: 'Using Environment Variable (recommended)',
     18 + value: 'envVar',
     19 + },
     20 + ]}
     21 + orientation="horizontal"
     22 + />
     23 + {selectedType === 'url' ? (
     24 + <InputField
     25 + label="URL"
     26 + name="url.value"
     27 + type="text"
     28 + tooltip="The URL of the data connector agent"
     29 + placeholder="Enter the URI of the agent"
     30 + />
     31 + ) : (
     32 + <InputField
     33 + label="Environment Variable"
     34 + name="url.value"
     35 + type="text"
     36 + tooltip="The Environment variable that contains the URL of the data connector agent"
     37 + placeholder="DC_AGENT_URL_ENV_VAR"
     38 + />
     39 + )}
     40 + </div>
     41 + );
     42 +};
     43 + 
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ManageAgents/hooks/useAddAgent.ts
    skipped 10 lines
    11 11   // these are properties added to the response for console use
    12 12   error?: Error | null;
    13 13   name: string;
    14  - url: string;
     14 + url: string | { from_env: string };
    15 15   makeToast: () => void;
    16 16   status: 'unavailable' | 'error' | 'already-added' | 'added';
    17 17  };
    18 18   
    19 19  export type AddAgentResponse = AddAgentServerResponse & AddAgentConsoleProps;
    20 20   
    21  -type AddAgentArgs = { name: string; url: string };
     21 +type AddAgentArgs = { name: string; url: string | { from_env: string } };
    22 22   
    23 23  const AGENT_UNAVAILABLE_MESSAGE = 'Agent is not available';
    24 24   
    skipped 130 lines
  • ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ManageAgents/hooks/useListAvailableAgentsFromMetadata.ts
    skipped 24 lines
    25 25   
    26 26   return {
    27 27   name: dcAgentName,
    28  - url: definition.uri,
     28 + uri: definition.uri,
    29 29   };
    30 30   }
    31 31   );
    skipped 12 lines
  • ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/ManageAgents/types.ts
     1 +import { DataConnectorUri } from '../hasura-metadata-types';
     2 + 
    1 3  export type DcAgent = {
    2 4   name: string;
    3  - url: string;
     5 + uri: DataConnectorUri;
    4 6  };
    5 7   
  • ■ ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/Permissions/PermissionsForm/components/RowPermissionsBuilder/components/RowPermissionsInput.stories.tsx
    skipped 74 lines
    75 75   
    76 76   await userEvent.selectOptions(
    77 77   canvas.getByTestId('_exists._table-value-input'),
    78  - 'Label'
     78 + 'public.Label'
    79 79   );
    80 80   
    81 81   await userEvent.selectOptions(
    skipped 37 lines
    119 119   
    120 120   await userEvent.selectOptions(
    121 121   canvas.getByTestId('_exists._table-value-input'),
    122  - 'Label'
     122 + 'public.Label'
    123 123   );
    124 124   
    125 125   await userEvent.selectOptions(
    skipped 3 lines
    129 129   
    130 130   await userEvent.selectOptions(
    131 131   canvas.getByTestId('_exists._where._exists._table-value-input'),
    132  - 'Label'
     132 + 'public.Label'
    133 133   );
    134 134   
    135 135   await userEvent.selectOptions(
    skipped 260 lines
    396 396   
    397 397   await userEvent.selectOptions(
    398 398   canvas.getByTestId('_exists._table-value-input'),
    399  - 'Label'
     399 + 'public.Label'
    400 400   );
    401 401   
    402 402   expect(existElement).not.toHaveAttribute('disabled');
    skipped 133 lines
    536 536   comparators={comparators}
    537 537   logicalModel={undefined}
    538 538   logicalModels={[]}
    539  - permissions={{ Label: { id: { _eq: '' } } }}
     539 + permissions={{ 'Public.Label': { id: { _eq: '' } } }}
    540 540   />
    541 541   ),
    542 542  };
    skipped 757 lines
  • ■ ■ ■ ■ ■
    frontend/libs/console/legacy-ce/src/lib/features/hasura-metadata-types/backendConfigs/backendConfigs.ts
    1 1  export type BackendConfigs = {
    2  - dataconnector: Record<string, { uri: string }>;
     2 + dataconnector: Record<string, DataConnectorBackendConfig>;
    3 3  };
    4 4   
     5 +export type DataConnectorBackendConfig = {
     6 + uri: DataConnectorUri;
     7 +};
     8 + 
     9 +export type DataConnectorUri = string | { from_env: string };
     10 + 
  • ■ ■ ■ ■
    frontend/libs/console/legacy-ee/src/lib/components/Services/Metrics/Common/Modal.js
    skipped 242 lines
    243 243   <div className="rounded bg-white text-sm">
    244 244   <CustomCopy
    245 245   label={
    246  - <LabelValue className="inline-block" label="Generated SQL" />
     246 + <LabelValue className="inline-block" label="Generated Query" />
    247 247   }
    248 248   copy={JSON.stringify(generatedSql, null, 2)}
    249 249   displayColon={false}
    skipped 128 lines
  • ■ ■ ■ ■
    frontend/libs/console/legacy-ee/src/lib/components/Services/Metrics/SubscriptionWorkers/Modal.js
    skipped 62 lines
    63 63   const renderGeneratedSql = () => {
    64 64   if (generatedSql) {
    65 65   try {
    66  - return <CustomCopy label={'GENERATED SQL'} copy={generatedSql} />;
     66 + return <CustomCopy label={'GENERATED QUERY'} copy={generatedSql} />;
    67 67   } catch (e) {
    68 68   console.error(e);
    69 69   }
    skipped 133 lines
  • ■ ■ ■ ■ ■ ■
    frontend/package.json
    skipped 69 lines
    70 70   "apollo-link-http": "^1.5.16",
    71 71   "apollo-link-ws": "1.0.20",
    72 72   "await-to-js": "^3.0.0",
    73  - "axios": "0.27.2",
     73 + "axios": "1.6.0",
    74 74   "babel-plugin-transform-runtime": "^6.23.0",
    75 75   "brace": "0.11.1",
    76 76   "browser-hrtime": "^1.1.8",
    skipped 8 lines
    85 85   "dom-parser": "0.1.6",
    86 86   "form-urlencoded": "^6.1.0",
    87 87   "format-graphql": "^1.4.0",
    88  - "graphiql": "1.0.0-alpha.0",
     88 + "graphiql": "1.4.7",
    89 89   "graphiql-code-exporter": "2.0.8",
    90 90   "graphiql-explorer": "0.6.2",
    91 91   "graphql": "14.5.8",
    skipped 89 lines
    181 181   "uuid": "8.3.2",
    182 182   "valid-url": "1.0.9",
    183 183   "xstate": "^4.30.1",
    184  - "zod": "3.20.2",
     184 + "zod": "3.22.3",
    185 185   "zustand": "^4.3.6"
    186 186   },
    187 187   "devDependencies": {
    188  - "@babel/core": "7.12.13",
     188 + "@babel/core": "^7.23.2",
    189 189   "@babel/eslint-parser": "^7.19.1",
    190 190   "@babel/plugin-proposal-class-properties": "7.8.3",
    191 191   "@babel/plugin-proposal-export-default-from": "^7.18.10",
    skipped 163 lines
    355 355   "msw-storybook-addon": "^1.6.3",
    356 356   "nx": "15.8.1",
    357 357   "nyc": "15.0.1",
    358  - "octokit": "^3.0.0",
     358 + "octokit": "^3.1.2",
    359 359   "os-browserify": "^0.3.0",
    360 360   "path-browserify": "^1.0.1",
    361 361   "postcss": "8.4.19",
    skipped 32 lines
  • frontend/yarn.lock
    Unable to diff as the file is too large.
  • ■ ■ ■ ■
    install-manifests/azure-container/azuredeploy.json
    skipped 54 lines
    55 55   "dbName": "[parameters('postgresDatabaseName')]",
    56 56   "containerGroupName": "[concat(parameters('name'), '-container-group')]",
    57 57   "containerName": "hasura-graphql-engine",
    58  - "containerImage": "hasura/graphql-engine:v2.36.0"
     58 + "containerImage": "hasura/graphql-engine:v2.37.0"
    59 59   },
    60 60   "resources": [
    61 61   {
    skipped 70 lines
  • ■ ■ ■ ■
    install-manifests/azure-container-with-pg/azuredeploy.json
    skipped 97 lines
    98 98   "firewallRuleName": "allow-all-azure-firewall-rule",
    99 99   "containerGroupName": "[concat(parameters('name'), '-container-group')]",
    100 100   "containerName": "hasura-graphql-engine",
    101  - "containerImage": "hasura/graphql-engine:v2.36.0"
     101 + "containerImage": "hasura/graphql-engine:v2.37.0"
    102 102   },
    103 103   "resources": [
    104 104   {
    skipped 122 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/docker-compose/docker-compose.yaml
    skipped 7 lines
    8 8   environment:
    9 9   POSTGRES_PASSWORD: postgrespassword
    10 10   graphql-engine:
    11  - image: hasura/graphql-engine:v2.36.0
     11 + image: hasura/graphql-engine:v2.37.0
    12 12   ports:
    13 13   - "8080:8080"
    14 14   restart: always
    skipped 16 lines
    31 31   data-connector-agent:
    32 32   condition: service_healthy
    33 33   data-connector-agent:
    34  - image: hasura/graphql-data-connector:v2.36.0
     34 + image: hasura/graphql-data-connector:v2.37.0
    35 35   restart: always
    36 36   ports:
    37 37   - 8081:8081
    skipped 14 lines
  • ■ ■ ■ ■
    install-manifests/docker-compose-cockroach/docker-compose.yaml
    skipped 26 lines
    27 27   - "${PWD}/cockroach-data:/cockroach/cockroach-data"
    28 28   
    29 29   graphql-engine:
    30  - image: hasura/graphql-engine:v2.36.0
     30 + image: hasura/graphql-engine:v2.37.0
    31 31   ports:
    32 32   - "8080:8080"
    33 33   depends_on:
    skipped 19 lines
  • ■ ■ ■ ■
    install-manifests/docker-compose-https/docker-compose.yaml
    skipped 7 lines
    8 8   environment:
    9 9   POSTGRES_PASSWORD: postgrespassword
    10 10   graphql-engine:
    11  - image: hasura/graphql-engine:v2.36.0
     11 + image: hasura/graphql-engine:v2.37.0
    12 12   depends_on:
    13 13   - "postgres"
    14 14   restart: always
    skipped 29 lines
  • ■ ■ ■ ■
    install-manifests/docker-compose-ms-sql-server/docker-compose.yaml
    skipped 14 lines
    15 15   volumes:
    16 16   - mssql_data:/var/opt/mssql
    17 17   graphql-engine:
    18  - image: hasura/graphql-engine:v2.36.0
     18 + image: hasura/graphql-engine:v2.37.0
    19 19   ports:
    20 20   - "8080:8080"
    21 21   depends_on:
    skipped 21 lines
  • ■ ■ ■ ■
    install-manifests/docker-compose-pgadmin/docker-compose.yaml
    skipped 18 lines
    19 19   PGADMIN_DEFAULT_EMAIL: [email protected]
    20 20   PGADMIN_DEFAULT_PASSWORD: admin
    21 21   graphql-engine:
    22  - image: hasura/graphql-engine:v2.36.0
     22 + image: hasura/graphql-engine:v2.37.0
    23 23   ports:
    24 24   - "8080:8080"
    25 25   depends_on:
    skipped 17 lines
  • ■ ■ ■ ■
    install-manifests/docker-compose-postgis/docker-compose.yaml
    skipped 7 lines
    8 8   environment:
    9 9   POSTGRES_PASSWORD: postgrespassword
    10 10   graphql-engine:
    11  - image: hasura/graphql-engine:v2.36.0
     11 + image: hasura/graphql-engine:v2.37.0
    12 12   ports:
    13 13   - "8080:8080"
    14 14   depends_on:
    skipped 16 lines
  • ■ ■ ■ ■
    install-manifests/docker-compose-yugabyte/docker-compose.yaml
    skipped 22 lines
    23 23   - yugabyte-data:/var/lib/postgresql/data
    24 24   
    25 25   graphql-engine:
    26  - image: hasura/graphql-engine:v2.36.0
     26 + image: hasura/graphql-engine:v2.37.0
    27 27   ports:
    28 28   - "8080:8080"
    29 29   depends_on:
    skipped 22 lines
  • ■ ■ ■ ■
    install-manifests/docker-run/docker-run.sh
    skipped 2 lines
    3 3   -e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
    4 4   -e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
    5 5   -e HASURA_GRAPHQL_DEV_MODE=true \
    6  - hasura/graphql-engine:v2.36.0
     6 + hasura/graphql-engine:v2.37.0
    7 7   
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/athena/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.36.0
     16 + image: hasura/graphql-engine:v2.37.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/graphql-data-connector:v2.36.0
     51 + image: hasura/graphql-data-connector:v2.37.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8081:8081
    skipped 14 lines
  • ■ ■ ■ ■
    install-manifests/enterprise/aws-ecs/hasura-fargate-task.json
    skipped 3 lines
    4 4   "containerDefinitions": [
    5 5   {
    6 6   "name": "hasura",
    7  - "image": "hasura/graphql-engine:v2.36.0",
     7 + "image": "hasura/graphql-engine:v2.37.0",
    8 8   "portMappings": [
    9 9   {
    10 10   "hostPort": 8080,
    skipped 67 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/clickhouse/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.36.0
     16 + image: hasura/graphql-engine:v2.37.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/clickhouse-data-connector:v2.36.0
     51 + image: hasura/clickhouse-data-connector:v2.37.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8080:8081
    skipped 9 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/docker-compose/docker-compose.yaml
    skipped 14 lines
    15 15   environment:
    16 16   POSTGRES_PASSWORD: postgrespassword
    17 17   graphql-engine:
    18  - image: hasura/graphql-engine:v2.36.0
     18 + image: hasura/graphql-engine:v2.37.0
    19 19   ports:
    20 20   - "8080:8080"
    21 21   restart: always
    skipped 25 lines
    47 47   data-connector-agent:
    48 48   condition: service_healthy
    49 49   data-connector-agent:
    50  - image: hasura/graphql-data-connector:v2.36.0
     50 + image: hasura/graphql-data-connector:v2.37.0
    51 51   restart: always
    52 52   ports:
    53 53   - 8081:8081
    skipped 14 lines
  • ■ ■ ■ ■
    install-manifests/enterprise/kubernetes/deployment.yaml
    skipped 17 lines
    18 18   fsGroup: 1001
    19 19   runAsUser: 1001
    20 20   containers:
    21  - - image: hasura/graphql-engine:v2.36.0
     21 + - image: hasura/graphql-engine:v2.37.0
    22 22   imagePullPolicy: IfNotPresent
    23 23   name: hasura
    24 24   readinessProbe:
    skipped 80 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/mariadb/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.36.0
     16 + image: hasura/graphql-engine:v2.37.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/graphql-data-connector:v2.36.0
     51 + image: hasura/graphql-data-connector:v2.37.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8081:8081
    skipped 19 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/mongodb/docker-compose.yaml
    skipped 29 lines
    30 30   MONGO_INITDB_ROOT_USERNAME: mongouser
    31 31   MONGO_INITDB_ROOT_PASSWORD: mongopassword
    32 32   hasura:
    33  - image: hasura/graphql-engine:v2.36.0
     33 + image: hasura/graphql-engine:v2.37.0
    34 34   restart: always
    35 35   ports:
    36 36   - 8080:8080
    skipped 23 lines
    60 60   postgres:
    61 61   condition: service_healthy
    62 62   mongo-data-connector:
    63  - image: hasura/mongo-data-connector:v2.36.0
     63 + image: hasura/mongo-data-connector:v2.37.0
    64 64   ports:
    65 65   - 3000:3000
    66 66  volumes:
    skipped 3 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/mysql/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.36.0
     16 + image: hasura/graphql-engine:v2.37.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/graphql-data-connector:v2.36.0
     51 + image: hasura/graphql-data-connector:v2.37.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8081:8081
    skipped 22 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/oracle/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.36.0
     16 + image: hasura/graphql-engine:v2.37.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/graphql-data-connector:v2.36.0
     51 + image: hasura/graphql-data-connector:v2.37.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8081:8081
    skipped 21 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/redshift/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.36.0
     16 + image: hasura/graphql-engine:v2.37.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/graphql-data-connector:v2.36.0
     51 + image: hasura/graphql-data-connector:v2.37.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8081:8081
    skipped 14 lines
  • ■ ■ ■ ■ ■ ■
    install-manifests/enterprise/snowflake/docker-compose.yaml
    skipped 12 lines
    13 13   environment:
    14 14   POSTGRES_PASSWORD: postgrespassword
    15 15   hasura:
    16  - image: hasura/graphql-engine:v2.36.0
     16 + image: hasura/graphql-engine:v2.37.0
    17 17   restart: always
    18 18   ports:
    19 19   - 8080:8080
    skipped 28 lines
    48 48   data-connector-agent:
    49 49   condition: service_healthy
    50 50   data-connector-agent:
    51  - image: hasura/graphql-data-connector:v2.36.0
     51 + image: hasura/graphql-data-connector:v2.37.0
    52 52   restart: always
    53 53   ports:
    54 54   - 8081:8081
    skipped 14 lines
  • ■ ■ ■ ■
    install-manifests/google-cloud-k8s-sql/deployment.yaml
    skipped 15 lines
    16 16   spec:
    17 17   containers:
    18 18   - name: graphql-engine
    19  - image: hasura/graphql-engine:v2.36.0
     19 + image: hasura/graphql-engine:v2.37.0
    20 20   ports:
    21 21   - containerPort: 8080
    22 22   readinessProbe:
    skipped 60 lines
  • ■ ■ ■ ■
    install-manifests/kubernetes/deployment.yaml
    skipped 17 lines
    18 18   app: hasura
    19 19   spec:
    20 20   containers:
    21  - - image: hasura/graphql-engine:v2.36.0
     21 + - image: hasura/graphql-engine:v2.37.0
    22 22   imagePullPolicy: IfNotPresent
    23 23   name: hasura
    24 24   env:
    skipped 22 lines
  • ■ ■ ■ ■ ■
    metadata.openapi.json
    skipped 3753 lines
    3754 3754   "type": "string"
    3755 3755   },
    3756 3756   "uri": {
    3757  - "type": "string"
     3757 + "oneOf": [
     3758 + {
     3759 + "type": "string"
     3760 + },
     3761 + {
     3762 + "$ref": "#/components/schemas/FromEnv"
     3763 + }
     3764 + ]
    3758 3765   }
    3759 3766   },
    3760 3767   "required": [
    skipped 5987 lines
  • ■ ■ ■ ■
    packaging/cli-migrations/v3/docker-entrypoint.sh
    skipped 68 lines
    69 69   
    70 70  # apply metadata if the directory exist
    71 71  if [ -d "$HASURA_GRAPHQL_METADATA_DIR" ]; then
    72  - rm -rf "TEMP_PROJECT_DIR"
     72 + rm -rf "$TEMP_PROJECT_DIR"
    73 73   log "migrations-apply" "applying metadata from $HASURA_GRAPHQL_METADATA_DIR"
    74 74   mkdir -p "$TEMP_PROJECT_DIR"
    75 75   cp -a "$HASURA_GRAPHQL_METADATA_DIR/." "$TEMP_PROJECT_DIR/metadata/"
    skipped 32 lines
  • ■ ■ ■ ■ ■ ■
    packaging/graphql-engine-base/ubi.dockerfile
    1  -# DATE VERSION: 2023-11-27
     1 +# DATE VERSION: 2023-12-21
    2 2  # Modify the above date version (YYYY-MM-DD) if you want to rebuild the image
    3 3   
    4  -FROM registry.access.redhat.com/ubi9-minimal:9.3-1361.1699548032 as pg_dump_source
     4 +FROM registry.access.redhat.com/ubi9/ubi-minimal:9.3-1475 as pg_dump_source
    5 5   
    6 6  ARG TARGETPLATFORM
    7 7   
    skipped 5 lines
    13 13   fi; \
    14 14   microdnf install -y postgresql16-server
    15 15   
    16  -FROM registry.access.redhat.com/ubi9-minimal:9.3-1361.1699548032
     16 +FROM registry.access.redhat.com/ubi9/ubi-minimal:9.3-1475
    17 17   
    18 18  ARG TARGETPLATFORM
    19 19   
    skipped 33 lines
  • ■ ■ ■ ■ ■ ■
    packaging/graphql-engine-base/ubuntu.dockerfile
    1  -# DATE VERSION: 2023-10-21
     1 +# DATE VERSION: 2024-01-23
    2 2  # Modify the above date version (YYYY-MM-DD) if you want to rebuild the image
    3 3   
    4  -FROM ubuntu:jammy-20231004
     4 +FROM ubuntu:jammy-20240111
    5 5   
    6 6  ### NOTE! Shared libraries here need to be kept in sync with `server-builder.dockerfile`!
    7 7   
    skipped 47 lines
  • rfcs/v3/command-mutations/basic.jpg
  • rfcs/v3/command-mutations/non-blocking.jpg
  • ■ ■ ■ ■ ■ ■
    rfcs/v3/command-mutations.md
     1 +# Commands
     2 + 
      3 +While reads are very well-suited to the "models" abstraction in V3, writes seem trickier. We have a bunch of problems to worry about (transactions, foreign key constraints, eventual consistency) that can't be particularly neatly described within the framework of GraphQL. Because of this, we seem to be talking more and more about a command graph as distinct from a read graph. This would be a set of distinct commands that determine what we can do (much like commands in the CQRS/ES sense). Here's an idea about what this could look like.
     4 + 
     5 +An example
     6 +----------
     7 + 
     8 +![](./command-mutations/basic.jpg)
     9 + 
     10 +Let's imagine I want to insert a user called Tom who likes dogs. To do this, I want to check that the email address is valid using some external API, and that no one else has the same username. I then want to verify their email address. In this model:
     11 + 
     12 +- I write a command handler (create_user) in the [TS connector](https://github.com/hasura/ndc-typescript-deno), which takes the user, does the validation, and tries to execute some PG connector commands:
     13 + 
      14 +  - unique_username: a check to see whether the username is available
      15 + 
      16 +  - add_user: an insert command to add the user
     17 + 
     18 +- The TS connector makes some command calls to ndc_postgres (via the same engine endpoint as the user's original command).
     19 + 
     20 +- I write the commands in ndc_postgres, which turns each command into a SQL statement, runs it, and returns results/errors. Note that this doesn't involve outside I/O - that is handled by the TS connector.
     21 + 
     22 +- The TS connector receives this response, sends the verification email, and returns the command response (the username and user_id).
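 
As a rough illustration of the flow above, here is a minimal TypeScript sketch of what such a create_user handler could look like inside the TS connector. The engine endpoint URL, payload shapes, helper names, and external services below are assumptions made up for illustration - they are not the actual ndc-typescript-deno API or the engine's command protocol.

```typescript
// Sketch only: endpoint URLs, payload shapes and response types are assumed.
type NewUser = { username: string; email: string; likes: string };

const ENGINE_COMMANDS_URL = "http://localhost:3000/commands"; // assumed endpoint

// Issue a single command against the engine (assumed request/response shape).
async function runCommand<T>(name: string, args: unknown): Promise<T> {
  const res = await fetch(ENGINE_COMMANDS_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ command: name, arguments: args }),
  });
  if (!res.ok) throw new Error(`command ${name} failed with status ${res.status}`);
  return (await res.json()) as T;
}

// The command handler: validate, check uniqueness, insert, then notify.
export async function create_user(user: NewUser) {
  // 1. Validate the email address using some external API (assumed URL).
  const check = await fetch(
    `https://email-checker.example.com/verify?email=${encodeURIComponent(user.email)}`
  );
  if (!check.ok) throw new Error("email validation failed");

  // 2. unique_username: check that no one else has the same username.
  const { available } = await runCommand<{ available: boolean }>(
    "unique_username",
    { username: user.username }
  );
  if (!available) throw new Error(`username ${user.username} is already taken`);

  // 3. add_user: insert the user via the PG connector command.
  const { user_id } = await runCommand<{ user_id: number }>("add_user", user);

  // 4. Send the verification email (assumed mailer service).
  await fetch("https://mailer.example.com/send-verification", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ email: user.email, user_id }),
  });

  // 5. Return the command response: the username and user_id.
  return { username: user.username, user_id };
}
```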
     23 + 
     24 +What if I want non-blocking writes?
     25 +-----------------------------------
     26 + 
     27 +![](./command-mutations/non-blocking.jpg)
     28 + 
     29 + 
     30 +It's basically the same, except we have an optional `on_complete` and `on_error` handler, and if these are included, we're into continuation country - these commands get issued when the others succeed/fail, and execute_command_set can return immediately. Not necessarily a first release feature, but a nice second release feature (and very appealing to anyone who does want to build event sourcing with Hasura).
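 
To make the continuation idea slightly more concrete, a non-blocking command set could carry its on_complete / on_error handlers alongside the commands themselves. The TypeScript object below is purely a sketch of what such a payload might look like; none of these field names are part of any agreed spec.

```typescript
// Purely illustrative payload shape for a non-blocking execute_command_set call;
// the field names and structure are assumptions, not an agreed specification.
const commandSet = {
  commands: [
    { name: "unique_username", arguments: { username: "tom" } },
    { name: "add_user", arguments: { username: "tom", likes: "dogs" } },
  ],
  // Continuations: issued once the commands above succeed or fail,
  // so the original call can return immediately.
  on_complete: { name: "send_verification_email", arguments: { username: "tom" } },
  on_error: { name: "log_failed_signup", arguments: { username: "tom" } },
};
```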
     31 + 
     32 +What if I want transactions?
     33 +----------------------------
     34 + 
     35 +We can have something like transactional_command_set. As part of the engine's request validation, we can ensure that every command in that set will execute on the same datasource (and that this datasource supports transactions). All we're really doing here is extending the command vocabulary of the engine.
     36 + 
     37 +What if I want nested inserts?
     38 +------------------------------
     39 + 
      40 +In many cases, manipulating data atomically is more pertinent than specifically aiming for nested inserts, because nested inserts come with complexities of their own, such as managing foreign key constraints and determining the order of data insertion. In our proposed model, however, the user has the flexibility to design their commands in whatever way best suits their requirements, keeping the intricacies of nested operations in mind. This gives users fine-grained control over the data manipulation process.
     41 + 
     42 +What if I want event sourcing?
     43 +------------------------------
     44 + 
     45 +Go for it - the ndc_postgres (or ndc_kafka or whatever) commands can just write to an event log, and something else can do the projection to a read model. We're making no assumptions here, so the user is free to handle things like consistency / conflict resolution in their own code.
     46 + 
     47 +What if I want multi-connector atomicity?
     48 +-----------------------------------------
     49 + 
     50 +This is difficult in general, as this concept won't exist for most (read: any) backends. However, a possible solution is to add a mutex to the TypeScript command that guarantees only one execution of a given command (or set of conflicting commands) at any given time. The pro is that you can manipulate as many sources (via as many connectors) as you want, but the con is that you could get into trouble if any other outside program mutates one of your data sources. With enough caveats, though, this is something a user could implement via the TS connector.
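 
As a sketch of the mutex idea (a generic promise-based lock, not an existing connector API), the TS command could serialise its own execution like this:

```typescript
// A minimal promise-chaining mutex: each caller queues up behind the previous
// one, so only one execution of the guarded command runs at any given time.
class Mutex {
  private last: Promise<unknown> = Promise.resolve();

  runExclusive<T>(task: () => Promise<T>): Promise<T> {
    const result = this.last.then(task, task);
    // Keep the chain alive even if this task fails; the caller still sees the error.
    this.last = result.catch(() => undefined);
    return result;
  }
}

const transferLock = new Mutex();

// Example: serialise a command that touches two different data sources.
async function transferFunds(from: string, to: string, amount: number) {
  return transferLock.runExclusive(async () => {
    // ...call the relevant connector commands here, one source after the other.
    // Note the caveat above: this cannot protect against outside programs
    // mutating the same sources concurrently.
    return { from, to, amount };
  });
}
```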
     51 + 
     52 +What if I want arbitrary validation?
     53 +------------------------------------
     54 + 
     55 +Let's imagine I want to forbid users from calling add_user on the ndc-postgres connector directly, and instead require them to go via the TypeScript connector (for example, maybe I need to check their email address against a spam directory online).
     56 + 
     57 +Our proposal was that we take a model similar to Docker networking: permissions apply to users when they reach the border of the Hasura cluster, but once you're inside, permissions are not re-applied. So, if the user doesn't have permission to call add_user, the TS connector can call add_user because it's calling the engine from within the network.
     58 + 
     59 +---
     60 + 
     61 +## Ongoing Considerations and Research
     62 + 
     63 +### Point Mutations in ndc-postgres
     64 + 
     65 +We propose the addition of point mutations (insert, update, delete) to ndc-postgres as procedures. This enhancement is designed to be compatible with the existing architecture and poses minimal risk. With this addition, along with the concept of "native mutations," we believe we achieve a robust and expressive solution for working with PostgreSQL.
     66 + 
     67 +### Enhancements to NDC Mutations
     68 + 
     69 +Continuing forward, we are exploring options to enhance NDC mutations to facilitate no-code mutations for users who find value in them. Proposed additions include:
     70 + 
     71 +- **Boolexps as Input Arguments:** Allow boolexps as input arguments for more dynamic and flexible mutations.
     72 +- **SELECT INTO Proposal:** Explore the possibility of introducing a "select into" proposal to enrich mutation capabilities.
     73 +- **Mutations with Constraints Proposal:** Considerations for mutations with constraints to provide a more nuanced mutation experience.
     74 + 
     75 +These proposals are subject to further exploration, and their implementation can be gated behind capabilities, ensuring a controlled rollout. Product research will be conducted to determine the feasibility and usefulness of these enhancements.
     76 + 
     77 +### TypeScript/Wasm Options and Transaction Support
     78 + 
     79 +Simultaneously, ongoing research and discussions are in progress regarding:
     80 + 
     81 +- **TypeScript/Wasm Options:** Exploring options for scripting complex mutations using TypeScript or WebAssembly.
     82 +- **Support for Transactions:** Investigating the feasibility and potential implementations for transaction support.
     83 + 
     84 +These initiatives reflect our commitment to continuous improvement and innovation within the Hasura ecosystem. Your feedback and insights are welcomed as we navigate these evolving aspects of our platform.
  • ■ ■ ■ ■ ■ ■
    rfcs/v3-descriptions.md
     1 + 
     2 +# Introduction
     3 + 
      4 +This RFC proposes adding support for adding descriptions to open DD objects, which will ultimately show up as the descriptions of different types of entities in the GraphQL schema.
     5 + 
      6 +Descriptions can be added to the following user-facing types:
     7 + 
     8 +1. Scalars
     9 +2. Objects
     10 +3. Models
     11 +4. Commands
     12 +5. Relationships
     13 + 
     14 +Descriptions can be useful to the users of the GraphQL schema to understand a type/root field/argument better.
     15 + 
     16 + 
     17 +## Adding a description field to the metadata object
     18 + 
     19 + 
     20 +### `ScalarType` metadata object
     21 + 
     22 +The scalar type can only have one type of description associated with it, i.e.
     23 +the description of the GraphQL type.
     24 + 
     25 +```json
     26 +{
     27 + "kind": "ScalarType",
     28 + "version": "v1",
     29 + "definition": {
     30 + "name": "NonNegativeInt",
     31 + "description": "Type to represent integers that are greater than or equal to 0",
     32 + "graphql": {
     33 + "typeName": "NonNegativeInt"
     34 + }
     35 + }
     36 +}
     37 +```
     38 + 
      39 +The description will show up in the GraphQL schema as follows:
     40 + 
     41 + 
     42 +```graphql
     43 +"""
     44 +Type to represent integers that are greater than or equal to 0
     45 +"""
     46 +scalar NonNegativeInt
     47 +```
     48 + 
     49 +### `ObjectType` metadata object
     50 + 
     51 +The `ObjectType` can have two kinds of descriptions:
     52 + 
     53 +1. Description of the object type.
     54 +2. Description of the individual fields.
     55 + 
     56 +```json
     57 +{
     58 + "kind": "ObjectType",
     59 + "version": "v1",
     60 + "definition": {
     61 + "name": "author",
     62 + "description": "Author object containing unique identifier and name.",
     63 + "fields": [
     64 + {
     65 + "name": "author_id",
     66 + "type": "CustomInt!",
     67 + "description": "ID to uniquely identify an author."
     68 + },
     69 + {
     70 + "name": "first_name",
     71 + "type": "String!"
     72 + },
     73 + {
     74 + "name": "last_name",
     75 + "type": "String!"
     76 + }
     77 + ],
     78 + "globalIdFields": [
     79 + "author_id"
     80 + ],
     81 + "graphql": {
     82 + "typeName": "Author"
     83 + }
     84 + }
     85 +}
     86 +```
     87 + 
      88 +The description will show up in the GraphQL schema as follows:
     89 + 
     90 +```graphql
     91 +"""
     92 +Author object containing unique identifier and name.
     93 +"""
     94 +type Author {
     95 + """
     96 + ID to uniquely identify an author.
     97 + """
     98 + author_id: CustomInt!,
     99 + first_name: String!,
     100 + last_name: String!
     101 +}
     102 +```
     103 + 
     104 +### `Model` metadata object
     105 + 
      106 +A model can have three different types of descriptions; the number of descriptions corresponds
      107 +to the number of GraphQL APIs that are chosen to be exposed. At the time of writing this
      108 +document, two types of GraphQL APIs are supported: `select_many` and `select_one`.
     109 + 
     110 +Models can also accept arguments and the argument fields can also accept descriptions.
     111 + 
     112 + 
     113 +```json
     114 +{
     115 + "kind": "Model",
     116 + "version": "v1",
     117 + "definition": {
     118 + "name": "Authors",
     119 + "objectType": "author",
     120 + "globalIdSource": true,
     121 + "arguments": [
     122 + {
     123 + "name": "include_inactive",
     124 + "type": "boolean",
     125 + "description": "If set to true, returns authors even if they are inactive."
     126 + }
     127 + ],
     128 + "graphql": {
     129 + "selectUniques": [
     130 + {
     131 + "queryRootField": "AuthorByID",
     132 + "description": "Returns at most an author identified by the given author_id, returns null if author is not found with the provided ID.",
     133 + "uniqueIdentifier": [
     134 + "author_id"
     135 + ]
     136 + }
     137 + ],
     138 + "selectMany": {
     139 + "queryRootField": "AuthorMany",
     140 + "description": "Selects multiple authors and supports filtering and pagination."
     141 + },
     142 + "filterExpressionType": "Author_Where_Exp",
     143 + "orderByExpressionType": "Author_Order_By",
     144 + "argumentsInputType": "Author_Arguments"
     145 + },
     146 + "filterableFields": [
     147 + "author_id"
     148 + ],
     149 + "orderableFields": [
     150 + "author_id"
     151 + ]
     152 + }
     153 +}
     154 +```
     155 + 
      156 +The descriptions will show up in the GraphQL schema as follows:
     157 + 
     158 +```graphql
     159 + 
     160 + 
     161 +type Query {
     162 + """
     163 + Selects multiple authors and supports filtering and pagination.
     164 + """
     165 + AuthorMany(
     166 + args: Author_Arguments,
     167 + where: Author_Where_Exp,
     168 + limit: Int,
     169 + offset: Int,
     170 + order_by: Author_Order_By): [Author!]
     171 + """
      172 + Returns at most one author identified by the given author_id; returns null if no author is found with the provided ID.
     173 + """
     174 + AuthorByID(
     175 + author_id: Int!,
     176 + """If set to true, returns authors even if they are inactive."""
     177 + include_inactive: boolean
     178 + ): Author
     179 +}
     180 +```
     181 + 
     182 +### `Command` metadata object
     183 + 
      184 +Commands can have two kinds of descriptions. One is the description of the
      185 +root field that will be exposed by the command, and the second is the description
      186 +of the input arguments to the command.
     187 + 
     188 +```json
     189 +{
     190 + "kind": "Command",
     191 + "version": "v1",
     192 + "definition": {
     193 + "name": "get_article_by_id",
     194 + "description": "Command to get an article by using the ID.",
     195 + "arguments": [
     196 + {
     197 + "name": "article_id",
     198 + "type": "Int!",
     199 + "description": "ID of the article."
     200 + }
     201 + ],
     202 + "outputType": "commandArticle",
     203 + "graphql": {
     204 + "rootFieldName": "getArticleById",
     205 + "rootFieldKind": "Query"
     206 + }
     207 + }
     208 +}
     209 +```
     210 + 
      211 +The descriptions will show up in the GraphQL schema as follows:
     212 + 
     213 +```graphql
     214 +type Query {
     215 + """
     216 + Command to get an article by using the ID.
     217 + """
     218 + getArticleById(
     219 + "ID of the article."
     220 + article_id: Int!
      221 + ): commandArticle
     222 +}
     223 +```
     224 + 
     225 +### `Relationship` metadata object
     226 + 
     227 +Relationships can have one kind of description. This description will become the
     228 +GraphQL description of the relationship field.
     229 + 
     230 + 
     231 +```json
     232 +{
     233 + "kind": "Relationship",
     234 + "version": "v1",
     235 + "definition": {
     236 + "source": "author",
     237 + "name": "Articles",
     238 + "description": "Fetches the corresponding articles of the author.",
     239 + "target": {
     240 + "model": {
     241 + "name": "Articles",
     242 + "relationshipType": "Array"
     243 + }
     244 + },
     245 + "mapping": [
     246 + {
     247 + "source": {
     248 + "fieldPath": [
     249 + {
     250 + "fieldName": "article_id"
     251 + }
     252 + ]
     253 + },
     254 + "target": {
     255 + "modelField": [
     256 + {
     257 + "fieldName": "article_id"
     258 + }
     259 + ]
     260 + }
     261 + }
     262 + ]
     263 + }
     264 +}
     265 + 
     266 +```
     267 + 
      268 +The descriptions will show up in the GraphQL schema as follows:
     269 + 
     270 +```graphql
     271 + type author {
     272 + author_id: Int!,
     273 + article_id: Int!,
     274 + """
     275 + Fetches the corresponding articles of the author.
     276 + """
     277 + Articles: [Articles!]
     278 + }
     279 +```
     280 + 
  • ■ ■ ■ ■ ■ ■
    scripts/dev.sh
    skipped 106 lines
    107 107   if [ "$EDITION_NAME" = "graphql-engine-pro" ];then
    108 108   EDITION_ABBREV=ee
    109 109   if [ -z "${HASURA_GRAPHQL_EE_LICENSE_KEY-}" ]; then
    110  - echo_error "You need to have the HASURA_GRAPHQL_EE_LICENSE_KEY environment variable defined."
    111  - echo_error "Ask a pro developer for the dev key."
    112  - exit 1
     110 + echo_warn "You don't have the HASURA_GRAPHQL_EE_LICENSE_KEY environment variable defined."
     111 + echo_warn "Ask a pro developer for the dev key."
     112 + echo_warn " Or: Press enter to continue with the pro binary in non-pro mode [will proceed in 15s]"
     113 + read -r -t15 || true
    113 114   fi
    114 115   # This is required for pro with EE license available:
    115 116   if [ -z "${HASURA_GRAPHQL_ADMIN_SECRET-}" ]; then
    skipped 748 lines
  • ■ ■ ■ ■ ■ ■
    scripts/haskell-transitive-dependency-import-audit.sh
     1 +#!/usr/bin/env bash
     2 +set -euo pipefail
     3 +shopt -s globstar
     4 + 
     5 +## This tries to audit our transitive dependencies for occurrences of
      6 +## problematic imports or function names. Very basic for now; can be
      7 +## extended. For now it depends on ripgrep.
      8 +if [ -z "${1:-}" ]; then
     9 + echo "pass search string as first argument"
     10 + exit 1
     11 +fi
     12 + 
     13 +REPO_TOPLEVEL=$(git rev-parse --show-toplevel)
     14 +FREEZE_FILE="$REPO_TOPLEVEL/cabal.project.freeze"
     15 + 
     16 +if [ ! -f "$FREEZE_FILE" ]; then
     17 + echo "Freeze file not found"
     18 + exit 1
     19 +fi
     20 + 
     21 +# Temp dir in RAM so we don't thrash SSD
     22 +TEMP_DIR=$(mktemp -d /dev/shm/hasura_dep_audit.XXXXXX)
     23 +function cleanup {
     24 + rmdir "$TEMP_DIR" || echo "$TEMP_DIR was not empty and could not be removed so it probably contains matching libraries you'll want to check out by hand"
     25 +}
     26 +trap cleanup EXIT
     27 + 
     28 +# Read the freeze file and extract package names and versions
     29 +rg '^.* any\.([^ ]*) ==([^,]*),?' -r '$1-$2' "$FREEZE_FILE" | while read -r pkg_identifier; do
     30 + # Download the package
     31 + cabal get -d "$TEMP_DIR" "$pkg_identifier" >/dev/null || echo " continuing anyway..."
     32 + 
     33 + if rg -q "$1" -ths "${TEMP_DIR:?}/$pkg_identifier"; then
     34 + echo
     35 + echo "Occurrence in $pkg_identifier"
     36 + else
     37 + echo -n .
     38 + # Clean up if nothing to see
     39 + rm -rf "${TEMP_DIR:?}/$pkg_identifier"
     40 + fi
     41 +done
     42 + 
     43 + 
  • ■ ■ ■ ■
    server/VERSIONS.json
    1 1  {
    2 2   "cabal-install": "3.10.1.0",
    3  - "ghc": "9.4.5",
     3 + "ghc": "9.6.4",
    4 4   "hlint": "3.6.1",
    5 5   "ormolu": "0.7.2.0"
    6 6  }
    skipped 1 lines
  • ■ ■ ■ ■ ■
    server/forks/hedis/hedis.cabal
    skipped 90 lines
    91 91   errors,
    92 92   network-uri,
    93 93   unliftio-core,
    94  - random
     94 + random,
     95 + uri-encode
    95 96   if !impl(ghc >= 8.0)
    96 97   build-depends:
    97 98   semigroups >= 0.11 && < 0.19
    skipped 93 lines
  • ■ ■ ■ ■ ■
    server/forks/hedis/src/Database/Redis/PubSub.hs
    skipped 39 lines
    40 40  import qualified Database.Redis.Core as Core
    41 41  import qualified Database.Redis.Connection as Connection
    42 42  import qualified Database.Redis.ProtocolPipelining as PP
     43 +import GHC.Conc
    43 44  import Database.Redis.Protocol (Reply(..), renderRequest)
    44 45  import Database.Redis.Types
    45 46   
    skipped 443 lines
    489 490  -- This is the only thread which ever receives data from the underlying
    490 491  -- connection.
    491 492  listenThread :: PubSubController -> PP.Connection -> IO ()
    492  -listenThread ctrl rawConn = forever $ do
     493 +listenThread ctrl rawConn = do
     494 + labelMe "Redis listenThread"
     495 + forever $ do
    493 496   msg <- PP.recv rawConn
    494 497   case decodeMsg msg of
    495 498   Msg (Message channel msgCt) -> do
    skipped 15 lines
    511 514  -- This is the only thread which ever sends data on the underlying
    512 515  -- connection.
    513 516  sendThread :: PubSubController -> PP.Connection -> IO ()
    514  -sendThread ctrl rawConn = forever $ do
     517 +sendThread ctrl rawConn = do
     518 + labelMe "Redis sendThread"
     519 + forever $ do
    515 520   PubSub{..} <- atomically $ readTBQueue (sendChanges ctrl)
    516 521   rawSendCmd rawConn subs
    517 522   rawSendCmd rawConn unsubs
    skipped 120 lines
    638 643  -- be used for the other. In particular, be aware that they use different utility functions to subscribe
    639 644  -- and unsubscribe to channels.
    640 645   
      646 +labelMe :: MonadIO m => String -> m ()
     647 +labelMe l = liftIO (myThreadId >>= flip labelThread l)
     648 + 
  • ■ ■ ■ ■ ■
    server/forks/hedis/src/Database/Redis/Sentinel.hs
    skipped 43 lines
    44 44  import Control.Concurrent
    45 45  import Control.Exception (Exception, IOException, evaluate, throwIO)
    46 46  import Control.Monad
     47 +import Control.Monad.IO.Class
    47 48  import Control.Monad.Catch (Handler (..), MonadCatch, catches, throwM)
    48 49  import Control.Monad.Except
    49 50  import Data.ByteString (ByteString)
    skipped 173 lines
  • ■ ■ ■ ■ ■
    server/forks/hedis/src/Database/Redis/URL.hs
    skipped 14 lines
    15 15  import qualified Database.Redis.ConnectionContext as CC
    16 16  import Network.HTTP.Base
    17 17  import Network.URI (parseURI, uriPath, uriScheme)
     18 +import Network.URI.Encode (decode)
    18 19  import Text.Read (readMaybe)
    19 20   
    20 21  import qualified Data.ByteString.Char8 as C8
    skipped 39 lines
    60 61   then connectHost defaultConnectInfo
    61 62   else h
    62 63   , connectPort = maybe (connectPort defaultConnectInfo) (CC.PortNumber . fromIntegral) (port uriAuth)
    63  - , connectAuth = C8.pack <$> password uriAuth
     64 + , connectAuth = (C8.pack . decode) <$> password uriAuth
    64 65   , connectDatabase = db
    65 66   }
    66 67   
  • ■ ■ ■ ■ ■ ■
    server/graphql-engine.cabal
    skipped 190 lines
    191 191   -- in the graphql-engine 'executable' stanza below, and in any other dependent
    192 192   -- executables (See mono #2610):
    193 193   -fexpose-all-unfoldings
    194  - -- Use O1 over O2, as (as of writing) it improves compile times a little, without
    195  - -- hurting performance:
    196  - -O1
    197  - -- This seems like a better default for us, lowering memory residency without
    198  - -- impacting compile times too much, though it does increase binary size:
    199  - -funfolding-use-threshold=640
     194 + -O2
     195 + -- This is lowered to limit compile time and binary size (default 80)
     196 + -funfolding-use-threshold=40
    200 197   else
    201 198   -- we just want to build fast:
    202 199   ghc-options: -O0
    skipped 1106 lines
  • ■ ■ ■ ■ ■
    server/lib/api-tests/api-tests.cabal
    skipped 136 lines
    137 137   Test.DataConnector.QuerySpec
    138 138   Test.DataConnector.SelectPermissionsSpec
    139 139   Test.Databases.BigQuery.Queries.SpatialTypesSpec
     140 + Test.Databases.BigQuery.Queries.TextFunctionsSpec
    140 141   Test.Databases.BigQuery.Queries.TypeInterpretationSpec
    141 142   Test.Databases.BigQuery.Schema.ComputedFields.TableSpec
    142 143   Test.Databases.Postgres.ArraySpec
    skipped 255 lines
  • ■ ■ ■ ■ ■ ■
    server/lib/api-tests/src/Test/Databases/BigQuery/Queries/TextFunctionsSpec.hs
     1 +{-# LANGUAGE QuasiQuotes #-}
     2 + 
     3 +-- | Test text search functions in BigQuery
     4 +module Test.Databases.BigQuery.Queries.TextFunctionsSpec (spec) where
     5 + 
     6 +import Data.Aeson (Value)
     7 +import Data.List.NonEmpty qualified as NE
     8 +import Harness.Backend.BigQuery qualified as BigQuery
     9 +import Harness.GraphqlEngine (postGraphql)
     10 +import Harness.Quoter.Graphql (graphql)
     11 +import Harness.Quoter.Yaml (interpolateYaml)
     12 +import Harness.Schema qualified as Schema
     13 +import Harness.Test.Fixture qualified as Fixture
     14 +import Harness.TestEnvironment (GlobalTestEnvironment, TestEnvironment (..))
     15 +import Harness.Yaml (shouldReturnYaml)
     16 +import Hasura.Prelude
     17 +import Test.Hspec (SpecWith, describe, it)
     18 + 
     19 +spec :: SpecWith GlobalTestEnvironment
     20 +spec =
     21 + Fixture.run
     22 + ( NE.fromList
     23 + [ (Fixture.fixture $ Fixture.Backend BigQuery.backendTypeMetadata)
     24 + { Fixture.setupTeardown = \(testEnvironment, _) ->
     25 + [ BigQuery.setupTablesAction schema testEnvironment
     26 + ]
     27 + }
     28 + ]
     29 + )
     30 + tests
     31 + 
     32 +--------------------------------------------------------------------------------
     33 +-- Schema
     34 + 
     35 +schema :: [Schema.Table]
     36 +schema =
     37 + [ (Schema.table "languages")
     38 + { Schema.tableColumns =
     39 + [ Schema.column "name" Schema.TStr
     40 + ],
     41 + Schema.tablePrimaryKey = [],
     42 + Schema.tableData =
     43 + [ [Schema.VStr "Python"],
     44 + [Schema.VStr "C"],
     45 + [Schema.VStr "C++"],
     46 + [Schema.VStr "Java"],
     47 + [Schema.VStr "C#"],
     48 + [Schema.VStr "JavaScript"],
     49 + [Schema.VStr "PHP"],
     50 + [Schema.VStr "Visual Basic"],
     51 + [Schema.VStr "SQL"],
     52 + [Schema.VStr "Scratch"],
     53 + [Schema.VStr "Go"],
     54 + [Schema.VStr "Fortran"],
     55 + [Schema.VStr "Delphi"],
     56 + [Schema.VStr "MATLAB"],
     57 + [Schema.VStr "Assembly"],
     58 + [Schema.VStr "Swift"],
     59 + [Schema.VStr "Kotlin"],
     60 + [Schema.VStr "Ruby"],
     61 + [Schema.VStr "Rust"],
     62 + [Schema.VStr "COBOL"]
     63 + ]
     64 + }
     65 + ]
     66 + 
     67 +--------------------------------------------------------------------------------
     68 +-- Tests
     69 + 
     70 +tests :: SpecWith TestEnvironment
     71 +tests = do
     72 + describe "Text predicates" do
     73 + it "ilike" \testEnvironment -> do
     74 + let schemaName :: Schema.SchemaName
     75 + schemaName = Schema.getSchemaName testEnvironment
     76 + 
     77 + let expected :: Value
     78 + expected =
     79 + [interpolateYaml|
     80 + data:
     81 + #{schemaName}_languages:
     82 + - name: Assembly
     83 + - name: Fortran
     84 + - name: Java
     85 + - name: JavaScript
     86 + - name: MATLAB
     87 + - name: Scratch
     88 + - name: Visual Basic
     89 + |]
     90 + 
     91 + actual :: IO Value
     92 + actual =
     93 + postGraphql
     94 + testEnvironment
     95 + [graphql|
     96 + query {
     97 + #{schemaName}_languages (
     98 + order_by: { name: asc },
     99 + where: { name: { _ilike: "%a%" } }
     100 + ) {
     101 + name
     102 + }
     103 + }
     104 + |]
     105 + 
     106 + shouldReturnYaml testEnvironment actual expected
     107 + 
     108 + it "like" \testEnvironment -> do
     109 + let schemaName = Schema.getSchemaName testEnvironment
     110 + let expected :: Value
     111 + expected =
     112 + [interpolateYaml|
     113 + data:
     114 + #{schemaName}_languages:
     115 + - name: Fortran
     116 + - name: Java
     117 + - name: JavaScript
     118 + - name: Scratch
     119 + - name: Visual Basic
     120 + |]
     121 + 
     122 + actual :: IO Value
     123 + actual =
     124 + postGraphql
     125 + testEnvironment
     126 + [graphql|
     127 + query {
     128 + #{schemaName}_languages (
     129 + order_by: { name: asc },
     130 + where: { name: { _like: "%a%" } }
     131 + ) {
     132 + name
     133 + }
     134 + }
     135 + |]
     136 + 
     137 + shouldReturnYaml testEnvironment actual expected
     138 + 
     139 + it "nlike" \testEnvironment -> do
     140 + let schemaName = Schema.getSchemaName testEnvironment
     141 + let expected :: Value
     142 + expected =
     143 + [interpolateYaml|
     144 + data:
     145 + #{schemaName}_languages:
     146 + - name: Assembly
     147 + - name: C
     148 + - name: C#
     149 + - name: C++
     150 + - name: COBOL
     151 + - name: Delphi
     152 + - name: Go
     153 + - name: Kotlin
     154 + - name: MATLAB
     155 + - name: PHP
     156 + - name: Python
     157 + - name: Ruby
     158 + - name: Rust
     159 + - name: SQL
     160 + - name: Swift
     161 + |]
     162 + 
     163 + actual :: IO Value
     164 + actual =
     165 + postGraphql
     166 + testEnvironment
     167 + [graphql|
     168 + query {
     169 + #{schemaName}_languages (
     170 + order_by: { name: asc },
     171 + where: { name: { _nlike: "%a%" } }
     172 + ) {
     173 + name
     174 + }
     175 + }
     176 + |]
     177 + 
     178 + shouldReturnYaml testEnvironment actual expected
     179 + 
     180 + it "nilike" \testEnvironment -> do
     181 + let schemaName = Schema.getSchemaName testEnvironment
     182 + let expected :: Value
     183 + expected =
     184 + [interpolateYaml|
     185 + data:
     186 + #{schemaName}_languages:
     187 + - name: C
     188 + - name: C#
     189 + - name: C++
     190 + - name: COBOL
     191 + - name: Delphi
     192 + - name: Go
     193 + - name: Kotlin
     194 + - name: PHP
     195 + - name: Python
     196 + - name: Ruby
     197 + - name: Rust
     198 + - name: SQL
     199 + - name: Swift
     200 + |]
     201 + 
     202 + actual :: IO Value
     203 + actual =
     204 + postGraphql
     205 + testEnvironment
     206 + [graphql|
     207 + query {
     208 + #{schemaName}_languages (
     209 + order_by: { name: asc },
     210 + where: { name: { _nilike: "%a%" } }
     211 + ) {
     212 + name
     213 + }
     214 + }
     215 + |]
     216 + 
     217 + shouldReturnYaml testEnvironment actual expected
     218 + 
  • ■ ■ ■ ■ ■
    server/lib/api-tests/src-feature-matrix/Hasura/FeatureMatrix.hs
    1 1  module Hasura.FeatureMatrix (render, parseLogs, extractFeatures, renderFeatureMatrix) where
    2 2   
    3 3  import Control.Applicative
     4 +import Control.Monad (unless, void)
    4 5  import Control.Monad.Except
    5 6  import Control.Monad.State
    6 7  import Data.Aeson
    skipped 217 lines
  • ■ ■ ■ ■
    server/lib/dc-api/test/Test/AgentClient.hs
    skipped 152 lines
    153 153   
    154 154   let phaseNamePrefix = maybe "" (<> "-") _acsPhaseName
    155 155   let filenamePrefix = printf "%s%02d" (Text.unpack phaseNamePrefix) _acsRequestCounter
    156  - let clientRequest = addHeaderRedaction _accSensitiveOutputHandling $ defaultMakeClientRequest _accBaseUrl request
     156 + clientRequest <- liftIO $ addHeaderRedaction _accSensitiveOutputHandling <$> defaultMakeClientRequest _accBaseUrl request
    157 157   
    158 158   testFolder <- getCurrentFolder
    159 159   -- HttpClient modifies the request with settings from the Manager before it sends it. To log these modifications
    skipped 37 lines
  • ■ ■ ■ ■ ■
    server/lib/dc-api/test/Test/Specs/QuerySpec/CustomOperatorsSpec.hs
    1 1  module Test.Specs.QuerySpec.CustomOperatorsSpec (spec) where
    2 2   
    3 3  import Control.Lens ((&), (?~))
    4  -import Control.Monad (forM_)
    5  -import Control.Monad.List (guard)
     4 +import Control.Monad
    6 5  import Data.HashMap.Strict qualified as HashMap
    7 6  import Data.Maybe (maybeToList)
    8 7  import Data.Text qualified as Text
    skipped 51 lines
  • ■ ■ ■ ■ ■
    server/lib/ekg-prometheus/System/Metrics/Prometheus.hs
    skipped 1092 lines
    1093 1093   gcdetails_cpu_ns = 0,
    1094 1094   gcdetails_elapsed_ns = 0,
    1095 1095   gcdetails_nonmoving_gc_sync_cpu_ns = 0,
    1096  - gcdetails_nonmoving_gc_sync_elapsed_ns = 0
     1096 + gcdetails_nonmoving_gc_sync_elapsed_ns = 0,
     1097 + gcdetails_block_fragmentation_bytes = 0
    1097 1098   }
    1098 1099   
    1099 1100  -- | The metrics registered by `registerGcMetrics`. These metrics are the
    skipped 265 lines
  • ■ ■ ■ ■
    server/lib/ekg-prometheus/ekg-prometheus.cabal
    skipped 73 lines
    74 74   base,
    75 75   ekg-prometheus,
    76 76   ekg-prometheus-benchmark,
    77  - criterion ^>= 1.5.9.0
     77 + criterion
    78 78   hs-source-dirs: benchmark-exe
    79 79   ghc-options: -O2 -threaded
    80 80   
    skipped 47 lines
  • ■ ■ ■ ■ ■
    server/lib/hasura-base/src/Hasura/Base/Instances.hs
    skipped 5 lines
    6 6  module Hasura.Base.Instances () where
    7 7   
    8 8  import Autodocodec qualified as AC
     9 +import Control.Monad.Fail
    9 10  import Control.Monad.Fix
    10 11  import Data.Aeson qualified as J
    11 12  import Data.ByteString (ByteString)
    skipped 144 lines
  • ■ ■ ■ ■ ■ ■
    server/lib/hasura-extras/hasura-extras.cabal
    skipped 3 lines
    4 4  build-type: Simple
    5 5  copyright: Hasura Inc.
    6 6   
     7 +flag profiling
     8 + description: Configures the project to be profiling-compatible
     9 + default: False
     10 + manual: True
     11 + 
    7 12  library
    8 13   hs-source-dirs: src
    9 14   default-language: GHC2021
    skipped 30 lines
    40 45   -Wno-redundant-bang-patterns
    41 46   -Wno-unused-type-patterns
    42 47   
     48 + if !flag(profiling)
     49 + -- ghc-heap-view can't be built with profiling
     50 + build-depends: ghc-heap-view
     51 + else
     52 + cpp-options: -DPROFILING
    43 53   build-depends:
    44 54   , QuickCheck
    45 55   , aeson
    skipped 11 lines
    57 67   , data-default-class
    58 68   , deepseq
    59 69   , exceptions
    60  - , ghc-heap-view
    61 70   , graphql-parser
    62 71   , hashable
    63 72   , hasura-prelude
    skipped 32 lines
    96 105   , x509-store
    97 106   , x509-system
    98 107   , x509-validation
     108 + , vector
     109 + , extra
    99 110   
    100 111   
    101 112   default-extensions:
    skipped 50 lines
    152 163   
    153 164   System.Monitor.Heartbeat
    154 165   
     166 + Kriti.Extended
     167 + 
  • ■ ■ ■ ■ ■
    server/lib/hasura-extras/src/Data/List/Extended.hs
    1 1  module Data.List.Extended
    2 2   ( duplicates,
    3 3   uniques,
     4 + uniquesOn,
    4 5   getDifference,
    5 6   getDifferenceOn,
    6 7   getOverlapWith,
    skipped 3 lines
    10 11   )
    11 12  where
    12 13   
    13  -import Data.Containers.ListUtils (nubOrd)
     14 +import Data.Containers.ListUtils (nubOrd, nubOrdOn)
    14 15  import Data.Function (on)
    15 16  import Data.HashMap.Strict.Extended qualified as HashMap
    16 17  import Data.HashSet qualified as Set
    skipped 12 lines
    29 30  -- [0,1,2,3,4,5,7,9]
    30 31  uniques :: (Ord a) => [a] -> [a]
    31 32  uniques = nubOrd
     33 + 
      34 +-- | Remove duplicates from a list not based on the original datatype, but
      35 +-- on a user-specified projection from that datatype.
     36 +-- >>> uniquesOn fst [("foo", 1), ("bar", 1), ("bar", 2), ("foo", 2), ("auth", 1), ("auth", 2)]
     37 +-- [("foo",1),("bar",1),("auth",1)]
     38 +uniquesOn :: (Ord b) => (a -> b) -> [a] -> [a]
     39 +uniquesOn = nubOrdOn
    32 40   
    33 41  getDifference :: (Hashable a) => [a] -> [a] -> Set.HashSet a
    34 42  getDifference = Set.difference `on` Set.fromList
    skipped 27 lines
  • ■ ■ ■ ■ ■
    server/lib/hasura-extras/src/Data/Parser/CacheControl.hs
    skipped 17 lines
    18 18   )
    19 19  where
    20 20   
     21 +import Control.Monad
    21 22  import Data.Attoparsec.Text qualified as AT
    22 23  import Data.Bifunctor (first)
    23 24  import Data.Text qualified as T
    skipped 182 lines
  • ■ ■ ■ ■ ■
    server/lib/hasura-extras/src/Data/Parser/JSONPath.hs
    skipped 4 lines
    5 5  where
    6 6   
    7 7  import Control.Applicative
     8 +import Control.Monad
    8 9  import Data.Aeson (Key)
    9 10  import Data.Aeson qualified as J
    10 11  import Data.Aeson.Key qualified as K
    skipped 100 lines
  • ■ ■ ■ ■ ■ ■
    server/lib/hasura-extras/src/Kriti/Extended.hs
     1 +module Kriti.Extended (fieldAccessPathTailValues) where
     2 + 
     3 +import Data.Aeson.KeyMap qualified as KeyMap
     4 +import Data.List.Extra (unsnoc)
     5 +import Data.Vector qualified as Vector
     6 +import Hasura.Prelude
     7 +import Kriti.Parser qualified as Kriti
     8 + 
     9 +-- | Extracts tail values of the given field access path.
     10 +-- Traverses the kriti template recursively to match the path with the field accessors.
     11 +-- If the path is matched, then the tail value is extracted.
     12 +fieldAccessPathTailValues :: [Text] -> Kriti.ValueExt -> [Text]
     13 +fieldAccessPathTailValues path = \case
     14 + -- Field accessors
     15 + Kriti.RequiredFieldAccess _span value tailValue -> fieldAccessTail path value tailValue
     16 + Kriti.OptionalFieldAccess _span value tailValues -> concatMap (fieldAccessTail path value) tailValues
      17 + -- We need to recurse into the following cases
     18 + Kriti.Object _span keyMap -> concatMap (fieldAccessPathTailValues path) $ KeyMap.elems keyMap
     19 + Kriti.Array _span values -> concatMap (fieldAccessPathTailValues path) $ Vector.toList values
     20 + Kriti.StringTem _span values -> concatMap (fieldAccessPathTailValues path) $ Vector.toList values
     21 + Kriti.Iff _span value1 value2 elIfs elseValue ->
     22 + fieldAccessPathTailValues path value1
     23 + <> fieldAccessPathTailValues path value2
     24 + <> concatMap (\(Kriti.Elif _span val1 val2) -> fieldAccessPathTailValues path val1 <> fieldAccessPathTailValues path val2) elIfs
     25 + <> fieldAccessPathTailValues path elseValue
     26 + Kriti.Eq _span value1 value2 ->
     27 + fieldAccessPathTailValues path value1
     28 + <> fieldAccessPathTailValues path value2
     29 + Kriti.NotEq _span value1 value2 ->
     30 + fieldAccessPathTailValues path value1
     31 + <> fieldAccessPathTailValues path value2
     32 + Kriti.Gt _span value1 value2 ->
     33 + fieldAccessPathTailValues path value1
     34 + <> fieldAccessPathTailValues path value2
     35 + Kriti.Gte _span value1 value2 ->
     36 + fieldAccessPathTailValues path value1
     37 + <> fieldAccessPathTailValues path value2
     38 + Kriti.Lt _span value1 value2 ->
     39 + fieldAccessPathTailValues path value1
     40 + <> fieldAccessPathTailValues path value2
     41 + Kriti.Lte _span value1 value2 ->
     42 + fieldAccessPathTailValues path value1
     43 + <> fieldAccessPathTailValues path value2
     44 + Kriti.And _span value1 value2 ->
     45 + fieldAccessPathTailValues path value1
     46 + <> fieldAccessPathTailValues path value2
     47 + Kriti.Or _span value1 value2 ->
     48 + fieldAccessPathTailValues path value1
     49 + <> fieldAccessPathTailValues path value2
     50 + Kriti.In _span value1 value2 ->
     51 + fieldAccessPathTailValues path value1
     52 + <> fieldAccessPathTailValues path value2
     53 + Kriti.Defaulting _span value1 value2 ->
     54 + fieldAccessPathTailValues path value1
     55 + <> fieldAccessPathTailValues path value2
     56 + Kriti.Range _span _ _ value1 value2 ->
     57 + fieldAccessPathTailValues path value1
     58 + <> fieldAccessPathTailValues path value2
     59 + Kriti.Function _span _ value -> fieldAccessPathTailValues path value
      60 + -- We don't need to recurse into the following cases
     61 + Kriti.String _span _ -> []
     62 + Kriti.Number _span _ -> []
     63 + Kriti.Boolean _span _ -> []
     64 + Kriti.Null _span -> []
     65 + Kriti.Var _span _varName -> []
     66 + 
     67 +-- | Match the path with the field accessors in the kriti template
     68 +matchPath :: [Text] -> Kriti.ValueExt -> Bool
     69 +matchPath path kritiValue =
     70 + case unsnoc path of
     71 + Nothing ->
     72 + -- Received an empty path
     73 + False
     74 + Just ([], firstElem) ->
     75 + -- Reached the initial end of the path
     76 + -- Now, kritiValue should be a variable that matches the firstElem
     77 + case kritiValue of
     78 + Kriti.Var _span varName -> varName == firstElem
     79 + _ -> False
     80 + Just (initPath, tail') ->
     81 + -- We need to recurse into for field access
     82 + case kritiValue of
     83 + Kriti.RequiredFieldAccess _span value tailValue ->
     84 + matchPath initPath value && ((Just tail') == coerceTailValueAsText tailValue)
     85 + _ -> False
     86 + 
     87 +-- | Get the tail of the path for a field access
     88 +--
     89 +-- fieldAccessTail ["a", "b"]
     90 +-- match case:
     91 +-- $.a.b.c -> ["c"]
     92 +-- $.a.b.[c] -> ["c"]
     93 +-- $.a.b.['c'] -> ["c"]
     94 +-- $.a.b.["c"] -> ["c"]
     95 +-- $.a.b.[<kriti_exp>] -> []
     96 +-- not match case:
     97 +-- $.c.d.[<kriti_exp>] -> fieldAccessPathTailValues ["a", "b"] kriti_exp
     98 +fieldAccessTail ::
     99 + [Text] ->
     100 + Kriti.ValueExt ->
     101 + Either Text Kriti.ValueExt ->
     102 + [Text]
     103 +fieldAccessTail path accessPath tailValue =
     104 + if matchPath path accessPath
     105 + then maybeToList $ coerceTailValueAsText tailValue
     106 + else -- If path is not matched, look for kriti expressions in the tailValue
     107 + case tailValue of
     108 + Right kritiExp -> fieldAccessPathTailValues path kritiExp
     109 + Left _ -> []
     110 + 
     111 +-- | Coerce the tail value of the path as text
     112 +--
     113 +-- path.tail_value => Left text
     114 +-- path.[tail_value] => Right (Var _ text)
     115 +-- path.['tail_value'] => Right (String _ text)
     116 +-- path.["tail_value"] => Right (StringTem _ [String _ text])
     117 +coerceTailValueAsText :: Either Text Kriti.ValueExt -> Maybe Text
     118 +coerceTailValueAsText = \case
     119 + Left t -> Just t
     120 + Right (Kriti.Var _span t) -> Just t
     121 + Right (Kriti.String _span t) -> Just t
     122 + Right (Kriti.StringTem _span templates) ->
     123 + case Vector.toList templates of
     124 + [Kriti.String _span' t] -> Just t
     125 + _ -> Nothing
     126 + Right _ -> Nothing
     127 + 
  • ■ ■ ■ ■ ■
    server/lib/hasura-extras/src/Network/URI/Extended.hs
    skipped 4 lines
    5 5   )
    6 6  where
    7 7   
     8 +import Control.Monad
    8 9  import Data.Aeson
    9 10  import Data.Aeson.Types
    10 11  import Data.Hashable
    skipped 19 lines
  • ■ ■ ■ ■ ■ ■
    server/lib/hasura-extras/src/System/Monitor/Heartbeat.hs
    skipped 35 lines
    36 36  import Data.IORef
    37 37  import Data.Time
    38 38  import Data.Typeable
     39 +import GHC.Conc
    39 40  import GHC.Generics
    40 41  import Options.Generic
    41 42  import System.Environment
    skipped 24 lines
    66 67  monitorHeartbeat HeartbeatOptions {..} = do
    67 68   mainThread <- myThreadId
    68 69   check <- heartbeatChecker hoSource
    69  - void $ forkIO $ while $ do
    70  - threadDelay (hoFrequencySeconds * 10 ^ (6 :: Int))
     70 + void $ forkIO $ do
     71 + labelMe "monitorHeartbeat"
     72 + while $ do
     73 + threadDelay (hoFrequencySeconds * 10 ^ (6 :: Int))
    71 74   
    72  - latestBeat <- check
    73  - now <- getCurrentTime
     75 + latestBeat <- check
     76 + now <- getCurrentTime
    74 77   
    75  - let missedBeats =
    76  - (now `diffUTCTime` latestBeat)
    77  - / secondsToNominalDiffTime (fromIntegral hoFrequencySeconds)
     78 + let missedBeats =
     79 + (now `diffUTCTime` latestBeat)
     80 + / secondsToNominalDiffTime (fromIntegral hoFrequencySeconds)
    78 81   
    79  - if (missedBeats > 2)
    80  - then do
    81  - putStrLn "Heartbeats have stopped - Exiting"
    82  - throwTo mainThread ExitSuccess
    83  - return False
    84  - else return True
     82 + if (missedBeats > 2)
     83 + then do
     84 + putStrLn "Heartbeats have stopped - Exiting"
     85 + throwTo mainThread ExitSuccess
     86 + return False
     87 + else return True
    85 88   where
    86 89   while body = do
    87 90   cond <- body
    skipped 3 lines
    91 94  heartbeatChecker StdInSource = do
    92 95   start <- getCurrentTime
    93 96   lastHeartbeat <- newIORef start
    94  - void $ forkIO $ forever $ do
    95  - hb <- getLine
    96  - case hb of
    97  - "HB" -> do
    98  - now <- getCurrentTime
    99  - writeIORef lastHeartbeat now
    100  - _ -> return ()
     97 + void $ forkIO $ do
     98 + labelMe "heartbeatChecker"
     99 + forever $ do
     100 + hb <- getLine
     101 + case hb of
     102 + "HB" -> do
     103 + now <- getCurrentTime
     104 + writeIORef lastHeartbeat now
     105 + _ -> return ()
    101 106   
    102 107   return $ readIORef lastHeartbeat
    103 108   
    skipped 55 lines
    159 164  -- thread.
    160 165  heartbeatThread :: IO () -> Int -> IO (IO ())
    161 166  heartbeatThread emitHeartbeat frequencySeconds = do
    162  - threadHandle <- Async.async $ forever $ do
    163  - emitHeartbeat
    164  - threadDelay (frequencySeconds * 10 ^ (6 :: Int))
     167 + threadHandle <- Async.async $ do
     168 + labelMe "heartbeatThread"
     169 + forever $ do
     170 + emitHeartbeat
     171 + threadDelay (frequencySeconds * 10 ^ (6 :: Int))
    165 172   return (Async.cancel threadHandle)
    166 173   
     174 +labelMe :: String -> IO ()
     175 +labelMe l = myThreadId >>= flip labelThread l
     176 + 
  • ■ ■ ■ ■ ■ ■
    server/lib/hasura-prelude/src/Hasura/Prelude.hs
    skipped 67 lines
    68 68   findWithIndex,
    69 69   alphabet,
    70 70   alphaNumerics,
     71 + labelMe,
    71 72   
    72 73   -- * Extensions to @Data.Foldable@
    73 74   module Data.Time.Clock.Units,
    skipped 6 lines
    80 81  import Control.Arrow as M (first, second, (&&&), (***), (<<<), (>>>))
    81 82  import Control.DeepSeq as M (NFData, deepseq, force)
    82 83  import Control.Lens as M (ix, (%~))
     84 +import Control.Monad as M
    83 85  import Control.Monad.Base as M
    84 86  import Control.Monad.Except as M
     87 +import Control.Monad.Fix as M
     88 +import Control.Monad.IO.Class as M
    85 89  import Control.Monad.Identity as M
    86 90  import Control.Monad.Reader as M
    87 91  import Control.Monad.State.Strict as M
    skipped 75 lines
    163 167  import Data.Word as M (Word64)
    164 168  import Debug.Trace qualified as Debug (trace, traceM)
    165 169  import GHC.Clock qualified as Clock
     170 +import GHC.Conc
    166 171  import GHC.Generics as M (Generic)
    167 172  import System.IO.Unsafe (unsafePerformIO) -- for custom trace functions
    168 173  import Text.Pretty.Simple qualified as PS
    skipped 262 lines
    431 436  alphaNumerics :: String
    432 437  alphaNumerics = alphabet ++ "0123456789"
    433 438   
     439 +-- | 'labelThread' on this thread
     440 +labelMe :: (MonadIO m) => String -> m ()
     441 +labelMe l = liftIO (myThreadId >>= flip labelThread l)
     442 + 
  • ■ ■ ■ ■ ■ ■
    server/lib/incremental/test/Hasura/IncrementalSpec.hs
    1 1  {-# LANGUAGE Arrows #-}
     2 +-- new warning in 9.6 here mentions constraints not in this file...?:
     3 +{-# OPTIONS_GHC -Wno-redundant-constraints #-}
    2 4   
    3 5  module Hasura.IncrementalSpec (spec) where
    4 6   
    skipped 79 lines
  • ■ ■ ■ ■ ■ ■
    server/lib/pg-client/src/Control/Concurrent/Interrupt.hs
    skipped 12 lines
    13 13   SomeException,
    14 14   mask,
    15 15   throwIO,
    16  - throwTo,
    17 16   try,
    18 17   )
     18 +import GHC.Conc
    19 19  import Prelude
    20 20   
    21 21  -------------------------------------------------------------------------------
    skipped 12 lines
    34 34  -- provide some cancelling escape hatch.
    35 35  interruptOnAsyncException :: IO () -> IO a -> IO a
    36 36  interruptOnAsyncException interrupt action = mask $ \restore -> do
    37  - x <- async action
     37 + x <- async (labelMe "interruptOnAsyncException" >> action)
    38 38   
    39 39   -- By using 'try' with 'waitCatch', we can distinguish between asynchronous
    40 40   -- exceptions received from the outside, and those thrown by the wrapped action.
    skipped 30 lines
    71 71   Right (Right r) ->
    72 72   pure r
    73 73   
     74 +labelMe :: String -> IO ()
     75 +labelMe l = myThreadId >>= flip labelThread l
     76 + 
  • ■ ■ ■ ■ ■ ■
    server/lib/pg-client/src/Database/PG/Query/Listen.hs
    skipped 22 lines
    23 23   
    24 24  import Control.Concurrent (threadWaitRead)
    25 25  import Control.Exception.Safe (displayException, try)
     26 +import Control.Monad
    26 27  import Control.Monad.Except
     28 +import Control.Monad.IO.Class
     29 +import Control.Monad.Trans.Class
    27 30  import Data.Foldable
    28 31  import Data.String (IsString)
    29 32  import Data.Text qualified as T
    skipped 75 lines
  • ■ ■ ■ ■
    server/lib/schema-parsers/src/Hasura/GraphQL/Parser/Directives.hs
    skipped 45 lines
    46 46  import Hasura.GraphQL.Parser.Schema
    47 47  import Hasura.GraphQL.Parser.Variable
    48 48  import Language.GraphQL.Draft.Syntax qualified as G
    49  -import Type.Reflection (Typeable, typeRep)
     49 +import Type.Reflection (Typeable, typeRep, (:~:) (Refl))
    50 50  import Witherable (catMaybes)
    51 51  import Prelude
    52 52   
    skipped 244 lines
  • ■ ■ ■ ■
    server/lib/schema-parsers/src/Hasura/GraphQL/Parser/Internal/Input.hs
    skipped 16 lines
    17 17   )
    18 18  where
    19 19   
    20  -import Control.Applicative (Alternative ((<|>)), liftA2)
     20 +import Control.Applicative (Alternative ((<|>)))
    21 21  import Control.Arrow ((>>>))
    22 22  import Control.Lens hiding (enum, index)
    23 23  import Control.Monad (join, unless, (<=<), (>=>))
    skipped 470 lines
  • ■ ■ ■ ■ ■
    server/lib/test-harness/src/Harness/Backend/Citus.hs
    skipped 23 lines
    24 24  --------------------------------------------------------------------------------
    25 25   
    26 26  import Control.Concurrent.Extended (sleep)
    27  -import Control.Monad.Reader
    28 27  import Data.Aeson (Value)
    29 28  import Data.ByteString.Char8 qualified as S8
    30 29  import Data.String (fromString)
    skipped 312 lines
  • ■ ■ ■ ■ ■
    server/lib/test-harness/src/Harness/Backend/Cockroach.hs
    skipped 23 lines
    24 24  --------------------------------------------------------------------------------
    25 25   
    26 26  import Control.Concurrent.Extended (sleep)
    27  -import Control.Monad.Reader
    28 27  import Data.Aeson (Value)
    29 28  import Data.ByteString.Char8 qualified as S8
    30 29  import Data.String (fromString)
    skipped 312 lines
  • ■ ■ ■ ■ ■
    server/lib/test-harness/src/Harness/Backend/Postgres.hs
    skipped 41 lines
    42 42  --------------------------------------------------------------------------------
    43 43   
    44 44  import Control.Concurrent.Extended (sleep)
    45  -import Control.Monad.Reader
    46 45  import Data.Aeson (Value)
    47 46  import Data.Aeson qualified as J
    48 47  import Data.Monoid (Last (..))
    skipped 488 lines
  • ■ ■ ■ ■ ■
    server/lib/test-harness/src/Harness/Backend/Sqlserver.hs
    skipped 22 lines
    23 23  --------------------------------------------------------------------------------
    24 24   
    25 25  import Control.Concurrent.Extended (sleep)
    26  -import Control.Monad.Reader
    27 26  import Data.Aeson (Value)
    28 27  import Data.String (fromString)
    29 28  import Data.String.Interpolate (i)
    skipped 333 lines
  • ■ ■ ■ ■ ■
    server/lib/test-harness/src/Harness/Constants.hs
    skipped 315 lines
    316 316   soTriggersErrorLogLevelStatus = Init._default Init.triggersErrorLogLevelStatusOption,
    317 317   soAsyncActionsFetchBatchSize = Init._default Init.asyncActionsFetchBatchSizeOption,
    318 318   soPersistedQueries = Init._default Init.persistedQueriesOption,
    319  - soPersistedQueriesTtl = Init._default Init.persistedQueriesTtlOption
     319 + soPersistedQueriesTtl = Init._default Init.persistedQueriesTtlOption,
     320 + soRemoteSchemaResponsePriority = Init._default Init.remoteSchemaResponsePriorityOption,
     321 + soHeaderPrecedence = Init._default Init.configuredHeaderPrecedenceOption
    320 322   }
    321 323   
    322 324  -- | What log level should be used by the engine; this is not exported, and
    skipped 16 lines
  • ■ ■ ■ ■ ■
    server/lib/test-harness/src/Harness/Logging/Messages.hs
    skipped 161 lines
    162 162   encFailureReason :: FailureReason -> Value
    163 163   encFailureReason = \case
    164 164   NoReason -> object [("failure_reason", String "NoReason")]
     165 + ColorizedReason reason ->
     166 + object
     167 + [ ("failure_reason", String "Reason"),
     168 + ("reason", toJSON reason)
     169 + ]
    165 170   Reason reason ->
    166 171   object
    167 172   [ ("failure_reason", String "Reason"),
    skipped 209 lines
  • ■ ■ ■ ■ ■
    server/lib/test-harness/src/Harness/Quoter/Yaml.hs
    skipped 11 lines
    12 12  where
    13 13   
    14 14  import Control.Exception.Safe (Exception, impureThrow, throwM)
    15  -import Control.Monad.Identity
    16 15  import Control.Monad.Trans.Resource (ResourceT)
    17 16  import Data.Aeson qualified as J
    18 17  import Data.Conduit (runConduitRes, (.|))
    skipped 118 lines
  • ■ ■ ■ ■ ■
    server/src-lib/Control/Monad/Circular.hs
    skipped 84 lines
    85 85  import Control.Monad.Reader
    86 86  import Control.Monad.State.Strict
    87 87  import Control.Monad.Writer.Strict
    88  -import Data.HashMap.Lazy (HashMap)
    89 88  import Data.HashMap.Lazy qualified as Map
    90  -import Data.Hashable (Hashable)
    91  -import Prelude
     89 +import Hasura.Prelude
    92 90   
    93 91  -- | CircularT is implemented as a state monad containing a lazy HashMap.
    94 92  --
    skipped 64 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Control/Monad/Memoize.hs
    1 1  {-# LANGUAGE UndecidableInstances #-}
     2 +-- ghc 9.6 seems to be doing something screwy with...
     3 +{-# OPTIONS_GHC -Wno-redundant-constraints #-}
    2 4   
    3 5  module Control.Monad.Memoize
    4 6   ( MonadMemoize (..),
    skipped 4 lines
    9 11  where
    10 12   
    11 13  import Control.Monad.Except
    12  -import Control.Monad.Reader (MonadReader, ReaderT, mapReaderT)
    13  -import Control.Monad.State.Strict (MonadState (..), StateT, evalStateT)
    14 14  import Data.Dependent.Map (DMap)
    15 15  import Data.Dependent.Map qualified as DM
    16 16  import Data.Functor.Identity
    17 17  import Data.GADT.Compare.Extended
    18 18  import Data.IORef
    19 19  import Data.Kind qualified as K
     20 +import Hasura.Prelude
    20 21  import Language.Haskell.TH qualified as TH
    21 22  import System.IO.Unsafe (unsafeInterleaveIO)
    22  -import Type.Reflection (Typeable, typeRep)
    23  -import Prelude
     23 +import Type.Reflection (Typeable, typeRep, (:~:) (Refl))
    24 24   
    25 25  {- Note [Tying the knot]
    26 26  ~~~~~~~~~~~~~~~~~~~~~~~~
    skipped 147 lines
    174 174   -- the point at which the effect is performed can be unpredictable. But
    175 175   -- this action just reads, never writes, so that isn’t a concern.
    176 176   parserById <-
    177  - liftIO $
    178  - unsafeInterleaveIO $
    179  - readIORef cell >>= \case
    180  - Just parser -> pure $ Identity parser
    181  - Nothing ->
    182  - error $
    183  - unlines
    184  - [ "memoize: parser was forced before being fully constructed",
    185  - " parser constructor: " ++ TH.pprint name
    186  - ]
     177 + liftIO
     178 + $ unsafeInterleaveIO
     179 + $ readIORef cell
     180 + >>= \case
     181 + Just parser -> pure $ Identity parser
     182 + Nothing ->
     183 + error
     184 + $ unlines
     185 + [ "memoize: parser was forced before being fully constructed",
     186 + " parser constructor: " ++ TH.pprint name
     187 + ]
    187 188   put $! DM.insert parserId parserById parsersById
    188 189   
    189 190   parser <- unMemoizeT buildParser
    skipped 39 lines
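The comment in the hunk above notes that the deferred readIORef is safe because it only reads. A small standalone sketch of the unsafeInterleaveIO pattern (hypothetical values, not the engine's parser table), showing that the read only runs when the result is forced, after the cell has been filled:

    import Data.IORef
    import System.IO.Unsafe (unsafeInterleaveIO)
    
    main :: IO ()
    main = do
      cell <- newIORef (Nothing :: Maybe Int)
      -- The read is deferred: nothing happens until lazyValue is forced.
      lazyValue <- unsafeInterleaveIO $ do
        contents <- readIORef cell
        case contents of
          Just x -> pure x
          Nothing -> error "forced before the cell was filled"
      writeIORef cell (Just 42) -- fill the cell after obtaining the lazy value
      print lazyValue -- the deferred read runs here and sees Just 42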
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Control/Monad/Trans/Extended.hs
     1 +-- ghc 9.6 seems to be doing something screwy with...
     2 +{-# OPTIONS_GHC -Wno-redundant-constraints #-}
     3 + 
    1 4  module Control.Monad.Trans.Extended
    2 5   ( TransT (..),
    3 6   )
    skipped 19 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Control/Monad/Trans/Managed.hs
     1 +-- ghc 9.6 seems to be doing something screwy with...
     2 +{-# OPTIONS_GHC -Wno-redundant-constraints #-}
     3 + 
    1 4  module Control.Monad.Trans.Managed
    2 5   ( ManagedT (..),
    3 6   allocate,
    skipped 77 lines
  • ■ ■ ■ ■ ■
    server/src-lib/Hasura/App/State.hs
    1 1  {-# LANGUAGE Arrows #-}
     2 +{-# OPTIONS_GHC -Wno-redundant-constraints #-}
    2 3   
    3 4  module Hasura.App.State
    4 5   ( -- * application state
    skipped 166 lines
    171 172   acAsyncActionsFetchInterval :: OptionalInterval,
    172 173   acApolloFederationStatus :: ApolloFederationStatus,
    173 174   acCloseWebsocketsOnMetadataChangeStatus :: CloseWebsocketsOnMetadataChangeStatus,
    174  - acSchemaSampledFeatureFlags :: SchemaSampledFeatureFlags
     175 + acSchemaSampledFeatureFlags :: SchemaSampledFeatureFlags,
     176 + acRemoteSchemaResponsePriority :: RemoteSchemaResponsePriority,
     177 + acHeaderPrecedence :: HeaderPrecedence
    175 178   }
    176 179   
    177 180  -- | Collection of the LoggerCtx, the regular Logger and the PGLogger
    skipped 114 lines
    292 295   acAsyncActionsFetchInterval = soAsyncActionsFetchInterval,
    293 296   acApolloFederationStatus = soApolloFederationStatus,
    294 297   acCloseWebsocketsOnMetadataChangeStatus = soCloseWebsocketsOnMetadataChangeStatus,
    295  - acSchemaSampledFeatureFlags = schemaSampledFeatureFlags
     298 + acSchemaSampledFeatureFlags = schemaSampledFeatureFlags,
     299 + acRemoteSchemaResponsePriority = soRemoteSchemaResponsePriority,
     300 + acHeaderPrecedence = soHeaderPrecedence
    296 301   }
    297 302   where
    298 303   buildEventEngineCtx = Inc.cache proc (httpPoolSize, fetchInterval, fetchBatchSize) -> do
    skipped 79 lines
  • ■ ■ ■ ■ ■
    server/src-lib/Hasura/App.hs
    skipped 811 lines
    812 812   fetchMetadataResourceVersion = runInSeparateTx fetchMetadataResourceVersionFromCatalog
    813 813   fetchMetadata = runInSeparateTx fetchMetadataAndResourceVersionFromCatalog
    814 814   fetchMetadataNotifications a b = runInSeparateTx $ fetchMetadataNotificationsFromCatalog a b
    815  - setMetadata r = runInSeparateTx . setMetadataInCatalog r
    816  - notifySchemaCacheSync a b c = runInSeparateTx $ notifySchemaCacheSyncTx a b c
     815 + 
    817 816   getCatalogState = runInSeparateTx getCatalogStateTx
    818 817   setCatalogState a b = runInSeparateTx $ setCatalogStateTx a b
     818 + 
     819 + updateMetadataAndNotifySchemaSync instanceId resourceVersion metadata cacheInvalidations =
     820 + runInSeparateTx $ do
     821 + newResourceVersion <- setMetadataInCatalog resourceVersion metadata
     822 + notifySchemaCacheSyncTx newResourceVersion instanceId cacheInvalidations
     823 + pure newResourceVersion
    819 824   
    820 825   -- stored source introspection is not available in this distribution
    821 826   fetchSourceIntrospection _ = pure $ Right Nothing
    skipped 155 lines
    977 982   setForkIOWithMetrics = Warp.setFork \f -> do
    978 983   void
    979 984   $ C.forkIOWithUnmask
    980  - ( \unmask ->
     985 + ( \unmask -> do
     986 + labelMe "runHGEServer_warp_fork"
    981 987   bracket_
    982 988   ( do
    983 989   EKG.Gauge.inc (smWarpThreads appEnvServerMetrics)
    skipped 354 lines
    1338 1344   (leActionEvents lockedEventsCtx)
    1339 1345   Nothing
    1340 1346   appEnvAsyncActionsFetchBatchSize
     1347 + (acHeaderPrecedence <$> getAppContext appStateRef)
    1341 1348   
    1342 1349   -- start a background thread to handle async action live queries
    1343 1350   void
    skipped 179 lines
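The earlier hunk in this file replaces the separate setMetadata and notifySchemaCacheSync methods with a single updateMetadataAndNotifySchemaSync that runs both steps inside one runInSeparateTx, so the notification always carries the resource version just written and the pair commits or rolls back together. A rough, self-contained sketch of that shape (the types and the IORef-backed "catalog" are stand-ins, not the engine's API):

    import Data.IORef
    
    type ResourceVersion = Int
    
    -- Stand-in for running an action inside a single database transaction.
    runInSeparateTx :: IO a -> IO a
    runInSeparateTx = id
    
    updateMetadataAndNotify :: IORef ResourceVersion -> String -> IO ResourceVersion
    updateMetadataAndNotify catalog metadata =
      runInSeparateTx $ do
        -- "setMetadataInCatalog": write the metadata, get the new version back.
        newVersion <- atomicModifyIORef' catalog (\v -> (v + 1, v + 1))
        -- "notifySchemaCacheSyncTx": notify using that same version.
        putStrLn ("schema sync notification for " <> metadata
                  <> " at version " <> show newVersion)
        pure newVersion
    
    main :: IO ()
    main = do
      catalog <- newIORef 0
      _ <- updateMetadataAndNotify catalog "example-metadata"
      pure ()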
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/Backends/BigQuery/DDL/BoolExp.hs
    skipped 6 lines
    7 7  import Data.Aeson.Key qualified as K
    8 8  import Data.Aeson.KeyMap qualified as KM
    9 9  import Data.Text.Extended
    10  -import Hasura.Backends.BigQuery.Types (ScalarType (StringScalarType))
     10 +import Hasura.Backends.BigQuery.Types (BooleanOperators (..), ScalarType (StringScalarType))
    11 11  import Hasura.Base.Error
    12 12  import Hasura.Prelude
    13 13  import Hasura.RQL.IR.BoolExp
    skipped 36 lines
    50 50   "$gte" -> parseGte
    51 51   "_lte" -> parseLte
    52 52   "$lte" -> parseLte
     53 + "$like" -> parseLike
    53 54   "_like" -> parseLike
    54  - "$like" -> parseLike
    55  - "_nlike" -> parseNlike
    56  - "$nlike" -> parseNlike
     55 + "$nlike" -> parseNLike
     56 + "_nlike" -> parseNLike
     57 + "$ilike" -> parseILike
     58 + "_ilike" -> parseILike
     59 + "$nilike" -> parseNILike
     60 + "_nilike" -> parseNILike
    57 61   "_in" -> parseIn
    58 62   "$in" -> parseIn
    59 63   "_nin" -> parseNin
    skipped 15 lines
    75 79   parseGte = AGTE <$> parseOne
    76 80   parseLte = ALTE <$> parseOne
    77 81   parseLike = guardType StringScalarType >> ALIKE <$> parseOne
    78  - parseNlike = guardType StringScalarType >> ANLIKE <$> parseOne
     82 + parseILike = guardType StringScalarType >> ABackendSpecific . ASTILike <$> parseOne
     83 + parseNLike = guardType StringScalarType >> ANLIKE <$> parseOne
     84 + parseNILike = guardType StringScalarType >> ABackendSpecific . ASTNILike <$> parseOne
    79 85   parseIn = AIN <$> parseManyWithType colTy
    80 86   parseNin = ANIN <$> parseManyWithType colTy
    81 87   parseIsNull = bool ANISNOTNULL ANISNULL <$> decodeValue val
    skipped 10 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/Backends/BigQuery/FromIr.hs
    skipped 1809 lines
    1810 1810   BigQuery.ASTIntersects v -> func "ST_INTERSECTS" v
    1811 1811   BigQuery.ASTDWithin (Ir.DWithinGeogOp r v sph) ->
    1812 1812   FunctionExpression (FunctionName "ST_DWITHIN" Nothing) [expression, v, r, sph]
     1813 + BigQuery.ASTILike v ->
     1814 + OpExpression ILikeOp (FunctionExpression (FunctionName "LOWER" Nothing) [expression]) v
     1815 + BigQuery.ASTNILike v ->
     1816 + OpExpression NotILikeOp (FunctionExpression (FunctionName "LOWER" Nothing) [expression]) v
    1813 1817   
    1814 1818  nullableBoolEquality :: Expression -> Expression -> Expression
    1815 1819  nullableBoolEquality x y =
    skipped 139 lines
  • ■ ■ ■ ■ ■
    server/src-lib/Hasura/Backends/BigQuery/Instances/Execute.hs
    skipped 37 lines
    38 38  import Hasura.RQL.Types.Common
    39 39  import Hasura.RQL.Types.Schema.Options qualified as Options
    40 40  import Hasura.SQL.AnyBackend qualified as AB
     41 +import Hasura.Server.Types (HeaderPrecedence)
    41 42  import Hasura.Session
    42 43  import Language.GraphQL.Draft.Syntax qualified as G
    43 44  import Network.HTTP.Client as HTTP
    skipped 101 lines
    145 146   [HTTP.Header] ->
    146 147   Maybe G.Name ->
    147 148   Maybe (HashMap G.Name (G.Value G.Variable)) ->
     149 + HeaderPrecedence ->
    148 150   m (DBStepInfo 'BigQuery, [ModelInfoPart])
    149  -bqDBMutationPlan _env _manager _logger _userInfo _stringifyNum _sourceName _sourceConfig _mrf _headers _gName _maybeSelSetArgs =
     151 +bqDBMutationPlan _env _manager _logger _userInfo _stringifyNum _sourceName _sourceConfig _mrf _headers _gName _maybeSelSetArgs _ =
    150 152   throw500 "mutations are not supported in BigQuery; this should be unreachable"
    151 153   
    152 154  -- explain
    skipped 116 lines
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/Backends/BigQuery/Instances/Schema.hs
    skipped 302 lines
    303 303   collapseIfNull
    304 304   (C.fromAutogeneratedName Name.__nlike)
    305 305   (Just "does the column NOT match the given pattern")
    306  - (ANLIKE . IR.mkParameter <$> typedParser)
     306 + (ANLIKE . IR.mkParameter <$> typedParser),
     307 + mkBoolOperator
     308 + tCase
     309 + collapseIfNull
     310 + (C.fromAutogeneratedName Name.__ilike)
     311 + (Just "does the column match the given case-insensitive pattern")
     312 + (ABackendSpecific . BigQuery.ASTILike . IR.mkParameter <$> typedParser),
     313 + mkBoolOperator
     314 + tCase
     315 + collapseIfNull
     316 + (C.fromAutogeneratedName Name.__nilike)
     317 + (Just "does the column NOT match the given case-insensitive pattern")
     318 + (ABackendSpecific . BigQuery.ASTNILike . IR.mkParameter <$> typedParser)
    307 319   ],
    308 320   -- Ops for Bytes type
    309 321   guard (isScalarColumnWhere (== BigQuery.BytesScalarType) columnType)
    skipped 203 lines
  • ■ ■ ■ ■
    server/src-lib/Hasura/Backends/BigQuery/Instances/Types.hs
    skipped 125 lines
    126 126   type SourceConfig 'BigQuery = BigQuery.BigQuerySourceConfig
    127 127   type SourceConnConfiguration 'BigQuery = BigQuery.BigQueryConnSourceConfig
    128 128   sourceConfigNumReadReplicas = const 0 -- not supported
    129  - sourceConfigConnectonTemplateEnabled = const False -- not supported
     129 + sourceConfigConnectonTemplate = const Nothing -- not supported
    130 130   sourceSupportsColumnRedaction = const True
    131 131   sourceConfigBackendSourceKind _sourceConfig = BigQueryKind
    132 132   
  • ■ ■ ■ ■ ■
    server/src-lib/Hasura/Backends/BigQuery/ToQuery.hs
    skipped 152 lines
    153 153   NotInOp -> "NOT IN"
    154 154   LikeOp -> "LIKE"
    155 155   NotLikeOp -> "NOT LIKE"
     156 + -- BigQuery doesn't have case-insensitive versions of this operator, but
     157 + -- that's ok: by this point, we'll have built a version of the query that
     158 + -- works case insensitively.
     159 + ILikeOp -> "LIKE"
     160 + NotILikeOp -> "NOT LIKE"
    156 161   
    157 162  fromPath :: JsonPath -> Printer
    158 163  fromPath path =
    skipped 541 lines
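Taken together with the FromIr and Schema hunks above, the new _ilike/_nilike operators compile by lowering the column side and then printing a plain LIKE, since BigQuery has no native ILIKE. A simplified sketch of that shape (the types below are stand-ins for the engine's Expression/Op machinery, not its real definitions):

    -- Stand-in for the printer's operator type; illustration only.
    data Op = LikeOp | ILikeOp
    
    renderOp :: Op -> String
    renderOp LikeOp = "LIKE"
    renderOp ILikeOp = "LIKE" -- no native ILIKE; case folding happens via LOWER
    
    -- Mirrors the FromIr step: the column side is wrapped in LOWER(...).
    renderILike :: String -> String -> String
    renderILike column pat =
      "LOWER(" <> column <> ") " <> renderOp ILikeOp <> " " <> pat
    
    main :: IO ()
    main = putStrLn (renderILike "`name`" "@pattern")
      -- prints: LOWER(`name`) LIKE @pattern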
  • ■ ■ ■ ■ ■ ■
    server/src-lib/Hasura/Backends/BigQuery/Types.hs
    skipped 500 lines
    501 501   | NotInOp
    502 502   | LikeOp
    503 503   | NotLikeOp
    504  - -- | SNE
    505  - -- | SILIKE
    506  - -- | SNILIKE
     504 + | -- | SNE
     505 + ILikeOp
     506 + | NotILikeOp
    507 507   -- | SSIMILAR
    508 508   -- | SNSIMILAR
    509 509   -- | SGTE
    skipped 280 lines
    790 790   | ASTWithin a
    791 791   | ASTIntersects a
    792 792   | ASTDWithin (DWithinGeogOp a)
     793 + | ASTILike a
     794 + | ASTNILike a
    793 795   deriving stock (Eq, Generic, Foldable, Functor, Traversable, Show)
    794 796   
    795 797  instance (NFData a) => NFData (BooleanOperators a)
    skipped 8 lines
    804 806   ASTTouches a -> ("_st_touches", J.toJSON a)
    805 807   ASTWithin a -> ("_st_within", J.toJSON a)
    806 808   ASTDWithin a -> ("_st_dwithin", J.toJSON a)
     809 + ASTILike a -> ("_st_ilike", J.toJSON a)
     810 + ASTNILike a -> ("_st_nilike", J.toJSON a)
    807 811   
    808 812  data FunctionName = FunctionName
    809 813   { functionName :: Text,
    skipped 290 lines
  • ■ ■ ■ ■
    server/src-lib/Hasura/Backends/DataConnector/Adapter/Backend.hs
    skipped 187 lines
    188 188   type ScalarTypeParsingContext 'DataConnector = API.ScalarTypesCapabilities
    189 189   
    190 190   sourceConfigNumReadReplicas = const 0 -- not supported
    191  - sourceConfigConnectonTemplateEnabled = const False -- not supported
     191 + sourceConfigConnectonTemplate = const Nothing -- not supported
    192 192   sourceSupportsColumnRedaction DC.SourceConfig {..} =
    193 193   _scCapabilities & API._cQueries >>= API._qcRedaction & isJust
    194 194   sourceConfigBackendSourceKind DC.SourceConfig {..} = DataConnectorKind _scDataConnectorName
    skipped 40 lines