Compare commits

...

88 Commits

Author SHA1 Message Date
Andi
3faa64a53c
Forbid TSAN with Jemalloc (#1842) 2024-03-25 07:20:55 +00:00
Andi
581767b491
Bump googletest to 1.14 (#1845) 2024-03-23 14:45:12 +00:00
Andi
b228b431a8
Fix installation of mgcxx (#1849) 2024-03-22 17:31:01 +01:00
Antonio Filipovic
13e3a1d0f7
Add distributed locks in HA (#1819)
- Add distributed locks
- Fix the wrong MAIN state on the follower coordinator
- Fix the wrong MAIN doing failover
2024-03-22 11:34:33 +00:00
Marko Barišić
89e13109d7
Fix jepsen nodes not starting up healthy (#1846)
* add a loop to check if all nodes started correctly and restart if any failed
2024-03-21 18:39:40 +01:00
DavIvek
56be736d30
Fix and update mgbench (#1838) 2024-03-21 12:34:59 +00:00
DavIvek
a3d2474c5b
Fix timestamps saving on-disk (#1811) 2024-03-21 10:50:55 +00:00
Andi
0913e95167
Rename HA startup flags (#1820) 2024-03-21 09:12:28 +00:00
Andi
f699c0b37f
Support bolt+routing (#1796) 2024-03-21 06:41:26 +00:00
Ante Pušić
9629f10166
Text search (#1603, #1739)
Add text search:
* named property search
* all-property search
* regex search
* aggregation over search results

Text search works with:
* non-parallel transactions
* durability (WAL files and snapshots)
* multitenancy
2024-03-20 10:29:24 +01:00
Marko Barišić
2ac649f3b5
Upgrade jepsen (#1594)
* Try with jepsen v0.3.5
* Add a few WIP adjustments
* Add replication restore state on startup flag
* Fix some run.sh scripts issues
* Improve cluster commands
* Run Jepsen on debian-12 with toolchain v5
---------
Co-authored-by: Marko Budiselic <mbudiselicbuda@gmail.com>
2024-03-18 16:38:58 +01:00
Marko Barišić
ec8536e11b
Make diff run on push to master again (#1826)
* Add workflow dispatch and run on push to master
2024-03-18 11:58:34 +01:00
Marko Barišić
84fe853169
Fix cargo not found when building in mgbuild container (#1825)
* Add source /home/mg/.cargo/env before cmake and make commands in mgbuild.sh
2024-03-18 10:47:59 +01:00
Josipmrden
082f9a7d9b
Add behaviour of no updates if vertex is updated with same value (#1791) 2024-03-15 14:45:21 +01:00
Aidar Samerkhanov
0ed2d18754
Add RollUpApply operator support to edge type index rewrite. (#1816) 2024-03-15 11:39:37 +04:00
Gareth Andrew Lloyd
8bc8e867e4
Pmr allocator unify (#1801)
Query allocator and evaluation allocator were different. After analysis, it
was determined they should be the same; this will help future development
reduce TypedValue copies during queries (the pool design is sketched below).

Changes:
- Common allocator, PoolResource backed by MonotonicResource
- Optimized Pool, now O(1) alloc/dealloc as all chunks in Pool form a single 
  free list
- 2nd PoolResource, using bin sizing, not as perfect for memory usage but 
  O(1) bin selection
- Now have jemalloc's background thread to make sure decay and return 
  to OS happens
- Optimized PropertyValue to be faster at destruction/copy/move
- Less temporary memory allocations
  - CSV reader now maintains a common line buffer it reuses on line reads
  - Writing out bolt values, now reuses a values buffer
  - Evaluating an int no longer makes temporary strings for errors that are
    most likely never thrown
  - ExpandVariable will reuse the existing edge list in the frame if one existed
2024-03-14 11:21:59 -07:00
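
A minimal sketch of the single-free-list pool described in the commit above, under assumed names (FixedPool is illustrative, not Memgraph's actual utils::Pool); every free chunk doubles as a list node, so both Allocate and Deallocate are O(1) pointer swaps:

// Sketch only, assumed names: a fixed-chunk pool whose free chunks form one
// intrusive free list, giving O(1) Allocate and Deallocate.
#include <algorithm>
#include <cstddef>
#include <cstdlib>
#include <new>

class FixedPool {
 public:
  explicit FixedPool(std::size_t chunk_size, std::size_t chunk_count) {
    // Round the chunk size up so every chunk can hold (and is aligned for)
    // an intrusive free-list node.
    chunk_size_ = std::max(chunk_size, sizeof(Node));
    chunk_size_ = (chunk_size_ + alignof(Node) - 1) & ~(alignof(Node) - 1);
    buffer_ = static_cast<char *>(std::malloc(chunk_size_ * chunk_count));
    if (!buffer_) throw std::bad_alloc{};
    // Thread every chunk onto the free list up front.
    for (std::size_t i = 0; i < chunk_count; ++i)
      free_list_ = new (buffer_ + i * chunk_size_) Node{free_list_};
  }
  ~FixedPool() { std::free(buffer_); }

  // O(1): pop the head of the free list.
  void *Allocate() {
    if (!free_list_) throw std::bad_alloc{};
    Node *node = free_list_;
    free_list_ = node->next;
    return node;
  }

  // O(1): the returned chunk becomes the new head of the free list.
  void Deallocate(void *chunk) { free_list_ = new (chunk) Node{free_list_}; }

 private:
  struct Node { Node *next; };
  std::size_t chunk_size_ = 0;
  char *buffer_ = nullptr;
  Node *free_list_ = nullptr;
};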
Marko Barišić
b0cdcd3483
Run CI in mgbuilder containers (#1749)
* Update deployment files for mgbuilders because of toolchain upgrade
* Fix args parameter in builder yaml files
* Add fedora 38, 39 and rockylinux 9.3 mgbuilder Dockerfiles
* Change format of ARG TOOLCHAIN_VERSION from toolchain-vX to vX
* Add function to check supported arch, build type, os and toolchain
* Add options to init subcommand
* Add image names to mgbuilders
* Add v2 of the run.sh script
* Add testing to run2.sh
* Add option for threads --thread
* Add options for enterprise license and organization name
* Make stop mgbuild container step run always
* Add --ci flag to init script
* Move init conditionals under build-memgraph flags
* Add --community flag to build-memgraph
* Change target dir inside mgbuild container
* Add node fix to debian 11, ubuntu 20.04 and ubuntu 22.04
* rm memgraph repo after installing deps
* Add mg user in Dockerfile
* Add step to install rust on all OSs
* Chown files copied into mgbuild container
* Add e2e tests
* Add jepsen test
* Bugfix: Using reference in a callback
* Bugfix: Broad target for e2e tests
* Up db info test limit
* Disable e2e streams tests
* Fix default THREADS
* Prioritize docker compose over docker-compose
* Improve selection between docker compose and docker-compose
* Install PyYAML as mg user
* Fix doxygen install for rocky linux 9.3
* Fix rocky-9.3 environment script to properly install sbcl
* Rename all rocky-9 mentions to rocky-9.3
* Add mgdeps-cache and benchgraph-api hostnames to mgbuild images
* Add logic to pull mgbuild image if missing
* Fix build errors on toolchain-v5 (#1806)
* Rename run2 script, remove run script, add small features to mgbuild.sh
* Add --no-copy flag to build-memgraph to resolve TODO
* Add timeouts to diff jobs
* Fix asio flaky clone, try mgdeps-cache first

---------

Co-authored-by: Andreja Tonev <andreja.tonev@memgraph.io>
Co-authored-by: Ante Pušić <ante.f.pusic@gmail.com>
Co-authored-by: antoniofilipovic <filipovicantonio1998@gmail.com>
2024-03-14 12:19:59 +01:00
Andi
24f8a14b43
Improve registration queries in HA environment (#1809) 2024-03-13 13:04:27 +00:00
Josipmrden
2cab07429e
Add new PR template (#1798) 2024-03-13 10:09:22 +01:00
DavIvek
de2e2048ef
Support label creation via property values (#1762) 2024-03-12 12:55:40 +00:00
Gareth Andrew Lloyd
a282542666
Optimise ORDER BY, RANGE, UNWIND (#1781)
* Optimise frame change

* Optimise distinct + orderby memory usage

- dispose collections as early as possible
- move values rather than copy (sketched below)

* Better perf, ORDER BY

* Optimise RANGE and UNWIND

* ConstraintVerificationInfo only if at least one constraint

* Optimise TypeValue

* Clang-tidy fix
2024-03-12 00:26:11 +00:00
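
The "move values rather than copy" point above, as a minimal stand-alone sketch (simplified types; Memgraph's real operators work on frames of TypedValue):

#include <algorithm>
#include <string>
#include <utility>
#include <vector>

using Value = std::string;   // simplified stand-in for TypedValue
using Row = std::vector<Value>;

// ORDER BY-style collection: take the row by value and move it into the sort
// buffer, so each row is deep-copied at most once instead of twice.
void Collect(std::vector<Row> &buffer, Row row) {
  buffer.push_back(std::move(row));
}

void SortByFirstColumn(std::vector<Row> &buffer) {
  std::sort(buffer.begin(), buffer.end(),
            [](const Row &a, const Row &b) { return a[0] < b[0]; });
  // Dispose of collections as early as possible once results are streamed
  // out downstream, e.g. buffer.clear() right after consumption.
}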
Josipmrden
462336ff78
Fix early exit for OR expression (#1738) 2024-03-11 22:44:15 +01:00
Aidar Samerkhanov
1c71d605ff
Fix PatternVisitor compilation in toolchain-v5 (#1803) 2024-03-08 19:20:40 -08:00
Antonio Filipovic
2a5388cea9
Add tests to verify log store works properly (#1794) 2024-03-08 15:16:30 +00:00
gvolfing
619b01f3f8
Implement edge type indices (#1542)
2024-03-08 08:44:48 +01:00
Andi
5ca98f9543
Fix snapshot creation in RSM and forbid multiple leaders (#1788) 2024-03-07 17:40:32 +00:00
Aidar Samerkhanov
a099417c56
List Pattern Comprehension planner (#1686) 2024-03-07 18:41:02 +04:00
Antonio Filipovic
02325f8673
Fix bug prone add server to cluster behavior (#1792) 2024-03-07 11:10:33 +00:00
Katarina Supe
6f849a14df
Update cypherl transform script (#1701)
* Update cypherl transform script

* Add new script and fix typo

* Add convert to separate files script

---------

Co-authored-by: Marko Budiselić <marko.budiselic@memgraph.com>
2024-03-07 10:04:36 +01:00
Andi
75aad72984
Improve in-memory RAFT state (#1782) 2024-03-06 09:16:46 +01:00
Antonio Filipovic
d4d4660af0
Add force sync REPLICA with MAIN (#1777) 2024-03-05 16:51:14 +00:00
Andi
1802dc93d1
Improve Raft log serialization (#1778) 2024-03-05 07:33:13 +00:00
Andi
822183b62d
Support failure of coordinators (#1728) 2024-03-04 07:24:18 +00:00
Antonio Filipovic
33caa27161
Ensure replication works on HA cluster in different scenarios (#1743) 2024-03-01 12:32:56 +01:00
Marko Barišić
f316f7db87
Add openssl to MEMGRAPH_BUILD_DEPS for amzn-2 and centos-7 (#1771) 2024-02-28 18:21:56 +01:00
Gareth Andrew Lloyd
55f224839e
Do not use UUID_STR_LEN (#1770)
Older libuuid did not have this macro; we need to publish for older
distros with older libs (a portable fallback is sketched below).
2024-02-28 17:46:03 +01:00
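
For context, UUID_STR_LEN is just the length of the canonical textual form, 36 characters plus the NUL terminator. One portable option on older util-linux libuuid headers is a guarded fallback define, sketched here (the commit itself simply stopped relying on the macro):

#include <uuid/uuid.h>  // uuid_generate, uuid_unparse (util-linux libuuid)

// Older libuuid headers lack this macro:
// "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" = 36 chars + 1 trailing NUL.
#ifndef UUID_STR_LEN
#define UUID_STR_LEN 37
#endif

#include <string>

std::string NewUuidString() {
  uuid_t uuid;
  uuid_generate(uuid);
  char text[UUID_STR_LEN];
  uuid_unparse(uuid, text);  // writes 36 characters and a NUL terminator
  return std::string{text};
}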
Antonio Filipovic
b561c61b64
HA: Add initial logic for choosing new replica (#1729) 2024-02-28 09:57:00 +00:00
DavIvek
b7de79d5a0
Fix schema.node_type_properties() and schema.rel_type_properties() (#1718) 2024-02-27 21:40:55 +00:00
Gareth Andrew Lloyd
da898be8f9
Compact Delta 80B -> 56B (#1747)
Make a special structure for old_disk_key. std::optional<std::string> was
40B, which was the largest member of our action union. It was replaced with
an 8B structure (see the sketch below).

This makes the largest member now vertex_edge at 24B, which means Delta is
now only 56B.

🥳🎉 Now less than a cacheline 🎊
2024-02-27 17:21:52 +00:00
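
The size arithmetic above can be sanity-checked with a toy layout; the structs below are illustrative assumptions, not Memgraph's actual Delta:

#include <cstdint>
#include <cstdio>
#include <optional>
#include <string>

// On a typical 64-bit libstdc++ build, std::optional<std::string> is 40B
// (32B string + engaged flag + padding), dominating the action union.
struct CompactDiskKey {        // stand-in for the new 8B old_disk_key
  const char *data = nullptr;  // storage owned elsewhere: one pointer, 8B
};

struct VertexEdge {            // stand-in for the largest remaining member
  void *vertex;
  void *edge;
  std::uint64_t edge_type;     // 3 * 8B = 24B
};

union Action {
  CompactDiskKey old_disk_key;
  VertexEdge vertex_edge;
};

int main() {
  std::printf("optional<string>: %zu B, Action union: %zu B\n",
              sizeof(std::optional<std::string>), sizeof(Action));
}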
Gareth Andrew Lloyd
a6fcdfd905
Make GC + snapshot, main lock friendly (#1759)
- Only IN_MEMORY_ANALYTICAL requires a unique lock during snapshot
- GC in some cases will be provided with a unique lock
  - This fact can be used for optimisations
  - In all other cases, optimisations should be done with an alternative
    check, not by taking a unique lock (as sketched below)

Also:
- Faster property lookup
- Faster index iteration (better conditional branching)
2024-02-27 15:45:08 +01:00
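
A sketch of the locking policy described above, under assumed names (std::shared_mutex standing in for the storage's main lock; the method names are illustrative):

#include <mutex>
#include <shared_mutex>

enum class StorageMode { IN_MEMORY_TRANSACTIONAL, IN_MEMORY_ANALYTICAL };

class Storage {
 public:
  void CreateSnapshot() {
    if (mode_ == StorageMode::IN_MEMORY_ANALYTICAL) {
      // Analytical mode has no MVCC deltas, so the snapshot must stop all
      // writers: take the main lock uniquely.
      std::unique_lock lock(main_lock_);
      WriteSnapshot();
    } else {
      // Transactional mode can snapshot a consistent MVCC view while other
      // transactions keep running: a shared lock suffices.
      std::shared_lock lock(main_lock_);
      WriteSnapshot();
    }
  }

  void RunGc() {
    // Opportunistic: if the unique lock is available, GC can take faster
    // paths; otherwise fall back to per-object safety checks instead of
    // blocking writers.
    std::unique_lock lock(main_lock_, std::try_to_lock);
    if (lock.owns_lock()) {
      GcWithExclusiveAccess();    // optimisations valid only under unique lock
    } else {
      GcWithAlternativeChecks();  // no unique lock: verify safety per object
    }
  }

 private:
  void WriteSnapshot() {}
  void GcWithExclusiveAccess() {}
  void GcWithAlternativeChecks() {}
  StorageMode mode_ = StorageMode::IN_MEMORY_TRANSACTIONAL;
  std::shared_mutex main_lock_;
};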
Marko Barišić
e88c7a0aa5
Add jobs for pushing ARM packages (#1765)
* Add jobs for pushing ARM packages
2024-02-27 12:08:53 +01:00
Marko Barišić
86ff96697d
Minor update to the rc workflow (#1760)
* Increase ARM build timeout to 120 minutes

* Remove PushToS3 job and make each Package job push to S3 individually

* Expand ARM timeout to 150 minutes for added safety; revert this after release
2024-02-26 22:57:21 +01:00
andrejtonev
f4d9a3695d
Introduce multi-tenancy to SHOW REPLICAS (#1735)
---------

Co-authored-by: Gareth Lloyd <gareth.lloyd@memgraph.io>
2024-02-26 19:05:49 +00:00
andrejtonev
c2e9df309a
Correctly call driver v1 tests (#1630) 2024-02-26 17:28:13 +00:00
andrejtonev
82c47ee80d
GetInfo simplification (#1621)
* Removed force dir in the GetInfo functions
2024-02-26 14:55:45 +00:00
andrejtonev
6a4ef55e90
Better auth user/role handling (#1699)
* Stop auth module from creating users
* Explicit about auth policy (check if no users defined OR auth module used)
* Role supports database access definition
* Authenticate() returns user or role
* AuthChecker generates QueryUserOrRole (can be empty)
* QueryUserOrRole actually authorizes
* Add auth cache invalidation
* Better database access queries (GRANT, DENY, REVOKE DATABASE)
2024-02-22 14:00:39 +00:00
Marko Budiselić
98727e0fa0
Update operating systems (#1371) 2024-02-22 11:14:48 +01:00
Aidar Samerkhanov
9a20ac494d
In BFS expansion with filtering by path, shrink the path to restore the pre-expansion state only if the path was changed. (#1745) 2024-02-22 05:34:08 +00:00
Marko Barišić
e302be98a2
Push successful RC builds to S3 (#1741)
* Add new workflow which calls release build workflows

* Make the workflow build packages only on RC tags

* Change artifact names to include OS name
2024-02-21 17:08:14 +01:00
Marko Budiselić
61b9bb0f59
Add toolchain-v5 compatibility, revert to C++20 (#587)
* Upgrade cppitertools, spdlog, fmt, rapidcheck
* Make compilation work on both v4 and v5 toolchains
2024-02-19 21:09:54 +01:00
Andi
7ec648b4ce
Add --experimental-enabled=high-availability (#1720) 2024-02-19 16:28:15 +00:00
Marko Budiselić
f098a9d5e3
Patch NuRaft for clang-17 compilation (#1733) 2024-02-19 14:50:37 +01:00
Josipmrden
bae3e8a6d3
Add function for property sizes (#1557)
2024-02-19 13:56:01 +01:00
Andi
f3574012c5
Add cpp23 support (#1726) 2024-02-19 10:36:51 +00:00
Gareth Andrew Lloyd
33c400fcc1
Fixup memory e2e tests (#1715)
- Remove the e2e that did concurrent mgp_* calls on the same transaction
  (ATM this is unsupported)
- Fix up the concurrent mgp_global_alloc test to be testing it more precisely
- Reduce the memory limit on detach delete test due to recent memory
  optimizations around deltas.
- No longer throw from the hook, through jemalloc C, to our C++ on the other
  side. This caused mutex unlocks to not happen.
- No longer allocate error messages while inside the hook. This caused
  recursive entry back inside jemalloc, which would try to relock a
  non-recursive mutex (the safe-hook pattern is sketched below).
2024-02-16 15:35:08 +00:00
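
The last two bullets boil down to a general rule for allocator hooks: code reached from inside malloc/free must neither throw across the allocator's C frames nor allocate. A hedged sketch of that pattern (names and the limit logic are illustrative, not Memgraph's actual hook):

#include <atomic>
#include <cstddef>
#include <cstdint>
#include <new>

std::atomic<std::int64_t> g_allocated{0};
constexpr std::int64_t kLimit = std::int64_t{1} << 30;
std::atomic<bool> g_limit_exceeded{false};

// Runs inside the allocator. Marked noexcept: an exception thrown here would
// unwind through jemalloc's C frames and skip its internal mutex unlocks.
bool OnAllocation(std::size_t size) noexcept {
  auto total = g_allocated.fetch_add(static_cast<std::int64_t>(size)) +
               static_cast<std::int64_t>(size);
  if (total > kLimit) {
    // Do NOT build an error string here: that would allocate, re-entering
    // the allocator and re-locking a non-recursive mutex. Record a flag only.
    g_limit_exceeded.store(true);
    return false;  // let the caller refuse the request
  }
  return true;
}

// Back in ordinary C++ code, outside the hook, allocating and throwing is safe.
void CheckMemoryLimit() {
  if (g_limit_exceeded.exchange(false)) throw std::bad_alloc{};
}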
Marko Budiselić
5ac938a6c9
Remove default assignees from issue-bug template (#1730) 2024-02-16 14:41:53 +01:00
Andi
3e3224f0a2
Forbid having multiple mains in the cluster (#1727) 2024-02-16 11:41:15 +00:00
Antonio Filipovic
bfc756c092
HA: Polish flow for replicas from coordinator (#1711) 2024-02-16 10:58:01 +01:00
Marko Barišić
5f2e3f01d0
Turn e2e tests back on for release build workflows (#1725) 2024-02-15 16:20:04 +01:00
Marko Barišić
2c774ff09b
Add rules for rc workflows (#1722) 2024-02-15 15:33:14 +01:00
Andi
20b47845f0
Forbid writing to cluster-managed main on restart (#1717) 2024-02-15 14:07:04 +01:00
Andi
fb281459b9
Add support for unregistering replication instances (#1712) 2024-02-14 14:24:59 +00:00
Andi
3a7e62f72c
Forbid branching when registering replica in auto-managed cluster (#1709) 2024-02-14 08:02:51 +00:00
Gareth Andrew Lloyd
f48151576b
System replication experimental flag (#1702)
- Remove the compile time control
- Introduce the runtime control flag

New flag `--experimental-enabled=system-replication`
2024-02-13 12:57:18 +00:00
Andi
4a7c7f0898
Distributed coordinators (#1693) 2024-02-13 08:49:28 +00:00
Ivan Milinović
7688a1b068
Fix unbound variable causing crash inside subquery (#1710) 2024-02-13 01:10:03 +01:00
Antonio Filipovic
4f4a569c72
Revert replication tests (#1707) 2024-02-12 16:42:57 +01:00
Ivan Milinović
a511e63c7a
Fix memory tracker counting wrong after OOM (#1651) 2024-02-11 20:29:06 +01:00
DavIvek
0133673f1d
Add support for query params in load csv (#1653) 2024-02-09 18:26:27 +01:00
DavIvek
786cdea260
Fix go driver test (#1708) 2024-02-09 17:07:30 +01:00
Antonio Filipovic
54f78f9217
Revert e2e tests and remove flaky ones (#1703) 2024-02-09 12:55:31 +01:00
Marko Barišić
dcdbd0a19a
Fix primary urls (#1700) 2024-02-08 14:19:30 +01:00
Andi
cf80687d1d
HA: Organize Raft coordinator group (#1687) 2024-02-08 09:11:33 +00:00
Aidar Samerkhanov
2fa8e00124
Fix accumulated path evaluation in builtin algorithms. (#1642)
Fix accumulated path evaluation in DFS, BFS, WeightedShortestPath and AllShortestPath algorithms.
2024-02-08 10:48:54 +04:00
Antonio Filipovic
c15b62a88d
HA: Disable replication from old main (#1674) 2024-02-07 11:20:47 +01:00
Gareth Andrew Lloyd
4ef6a1f9c3
Improve memory handling of Deltas (#1688)
- Reduce delta from 104B to 80B
- Hold and pass them around in a deque
- Detect and delete deltas within commit if safe to do so
2024-02-06 18:07:38 +01:00
andrejtonev
7ead00f23e
Adding authentication data replication (#1666)
* Add AUTH system tx deltas
* Add auth data RPC and handlers
* Support multiple system deltas in a single transaction
* Added e2e test
* Bugfix: KVStore segfault after move

---------

Co-authored-by: Gareth Lloyd <gareth.lloyd@memgraph.io>
2024-02-05 10:37:00 +00:00
Marko Budiselić
c46dad18fe
Add RocksDB ADR (#1659) 2024-02-03 19:21:13 +01:00
Andi
cb7b88ad92
HA: Support restart of instances (#1672) 2024-02-01 11:55:48 +01:00
Marko Barišić
b443934b68
Update release CI job timeouts (#1683)
* Set timeout for jobs in release CI to 60 minutes

* Set timeout for stress test large to 12 hours
2024-01-31 19:19:05 +01:00
Andi
6ab4235cc9
Add NuRaft library (#1678) 2024-01-31 13:13:59 +01:00
Marko Barišić
a9ef28c68e
Upgrade deprecated github actions (#1673)
* upgrade actions/checkout from v3 to v4

* upgrade actions/upload-artifact from v3 to v4

* upgrade actions/download-artifact from v2 and v3 to v4

* Fix duplicate artifact names in diff.yaml

* Fix duplicate artifact names in release_debian10.yaml and release_ubuntu2004.yaml
2024-01-30 22:52:56 +01:00
Marko Barišić
79361e9205
Disable e2e tests in all workflows (#1677)
* Disable e2e tests in diff.yaml

* Disable e2e tests in release builds
2024-01-30 19:37:00 +01:00
Gareth Andrew Lloyd
97b1e67d80
Fix auth durability (#1644)
* Change auth durability to store the hash algorithm
* Add salt to SHA256 (see the sketch below)
2024-01-30 18:17:05 +00:00
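
A sketch of what storing the hash algorithm plus a salt can look like in the durable record; the field names and the use of OpenSSL here are assumptions, not Memgraph's actual schema:

#include <openssl/sha.h>  // SHA256(); link with -lcrypto (assumed dependency)
#include <cstdio>
#include <string>

static std::string Sha256Hex(const std::string &data) {
  unsigned char digest[SHA256_DIGEST_LENGTH];
  SHA256(reinterpret_cast<const unsigned char *>(data.data()), data.size(),
         digest);
  char hex[2 * SHA256_DIGEST_LENGTH + 1];
  for (int i = 0; i < SHA256_DIGEST_LENGTH; ++i)
    std::snprintf(hex + 2 * i, 3, "%02x", digest[i]);
  return std::string{hex, 2 * SHA256_DIGEST_LENGTH};
}

struct StoredPassword {
  std::string algorithm;  // persisted, so pre-existing hashes stay decodable
  std::string salt;       // random per-user salt stored next to the hash
  std::string hash;       // hex digest of salt + password
};

bool VerifyPassword(const StoredPassword &stored, const std::string &password) {
  if (stored.algorithm != "sha256") return false;  // dispatch on stored algo
  return Sha256Hex(stored.salt + password) == stored.hash;
}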
Andi
78a88737f8
HA: Add automatic failover (#1646)
Co-authored-by: antoniofilipovic <filipovicantonio1998@gmail.com>
2024-01-29 15:34:00 +01:00
andrejtonev
ff44d68843
Simplify auth::Auth (#1663)
Moved various auth flags under a single config
Moved all regex logic under auth::Auth
2024-01-29 12:52:32 +00:00
Marko Barišić
f1484240a0
Make release CI predictable (#1658)
* Move stress test large to a separate workflow
* Combine Debian10 and Ubuntu20.04 stress tests into one workflow
* Remove BigMemory tag from release_debian10
* Move e2e tests to a separate job
* Move debug integration tests to a separate job
* Move release durability and stress tests to a separate job
* Move release benchmarks to a separate job
* Add 90 min timeout restriction to all jobs
* Move env variables to workflow level
* Move BUILD_TYPE env var to workflow level
---------

Co-authored-by: Aidar Samerkhanov <aidar.samerkhanov@memgraph.io>
2024-01-26 17:01:25 +01:00
Marko Barišić
1c95c3dc59
Add step to refresh jepsen cluster before test (#1667)
refresh jepsen during diff workflow
2024-01-26 10:50:03 +00:00
650 changed files with 35467 additions and 9628 deletions

View File

@@ -6,6 +6,7 @@ Checks: '*,
-altera-unroll-loops,
-android-*,
-cert-err58-cpp,
-cppcoreguidelines-avoid-do-while,
-cppcoreguidelines-avoid-c-arrays,
-cppcoreguidelines-avoid-goto,
-cppcoreguidelines-avoid-magic-numbers,
@@ -60,10 +61,11 @@ Checks: '*,
-readability-implicit-bool-conversion,
-readability-magic-numbers,
-readability-named-parameter,
-readability-identifier-length,
-misc-no-recursion,
-concurrency-mt-unsafe,
-bugprone-easily-swappable-parameters'
-bugprone-easily-swappable-parameters,
-bugprone-unchecked-optional-access'
WarningsAsErrors: ''
HeaderFilterRegex: 'src/.*'
AnalyzeTemporaryDtors: false

View File

@@ -3,7 +3,6 @@ name: Bug report
about: Create a report to help us improve
title: ""
labels: bug
assignees: gitbuda
---
**Memgraph version**

View File

@@ -1,14 +1,28 @@
### Description
Please briefly explain the changes you made here.
Please delete either the [master < Epic] or [master < Task] part, depending on your needs.
[master < Epic] PR
- [ ] Check, and update documentation if necessary
- [ ] Write E2E tests
- [ ] Compare the [benchmarking results](https://bench-graph.memgraph.com/) between the master branch and the Epic branch
- [ ] Provide the full content or a guide for the final git message
- [FINAL GIT MESSAGE]
[master < Task] PR
- [ ] Check, and update documentation if necessary
- [ ] Provide the full content or a guide for the final git message
- **[FINAL GIT MESSAGE]**
To keep docs changelog up to date, one more thing to do:
- [ ] Write a release note here, including added/changed clauses
### Documentation checklist
- [ ] Add the documentation label tag
- [ ] Add the bug / feature label tag
- [ ] Add the milestone for which this feature is intended
- If not known, set for a later milestone
- [ ] Write a release note, including added/changed clauses
- **[Release note text]**
- [ ] Link the documentation PR here
- **[Documentation PR link]**
- [ ] Tag someone from docs team in the comments

View File

@@ -16,7 +16,7 @@ jobs:
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)

View File

@@ -19,58 +19,92 @@ on:
jobs:
community_build:
name: "Community build"
runs-on: [self-hosted, Linux, X64, Diff]
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 60
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: RelWithDebInfo
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build community binaries
- name: Spin up mgbuild container
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
# Initialize dependencies.
./init
# Build community binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DMG_ENTERPRISE=OFF ..
make -j$THREADS
- name: Build release binaries
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph --community
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph unit
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure -j$THREADS
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
code_analysis:
name: "Code analysis"
runs-on: [self-hosted, Linux, X64, Diff]
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 60
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: Debug
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Spin up mgbuild container
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
# This is also needed if we want to do comparison against other branches
# See https://github.community/t/checkout-code-fails-when-it-runs-lerna-run-test-since-master/17920
- name: Fetch all history for all tags and branches
@@ -78,11 +112,13 @@ jobs:
- name: Initialize deps
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph --init-only
- name: Set base branch
if: ${{ github.event_name == 'pull_request' }}
@@ -96,162 +132,230 @@ jobs:
- name: Python code analysis
run: |
CHANGED_FILES=$(git diff -U0 ${{ env.BASE_BRANCH }}... --name-only)
for file in ${CHANGED_FILES}; do
echo ${file}
if [[ ${file} == *.py ]]; then
python3 -m black --check --diff ${file}
python3 -m isort --profile black --check-only --diff ${file}
fi
done
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph code-analysis --base-branch "${{ env.BASE_BRANCH }}"
- name: Build combined ASAN, UBSAN and coverage binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
cd build
cmake -DTEST_COVERAGE=ON -DASAN=ON -DUBSAN=ON ..
make -j$THREADS memgraph__unit
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph --coverage --asan --ubsan
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests. It is restricted to 2 threads intentionally, because higher concurrency makes the timing related tests unstable.
cd build
LSAN_OPTIONS=suppressions=$PWD/../tools/lsan.supp UBSAN_OPTIONS=halt_on_error=1 ctest -R memgraph__unit --output-on-failure -j2
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph unit-coverage
- name: Compute code coverage
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Compute code coverage.
cd tools/github
./coverage_convert
# Package code coverage.
cd generated
tar -czf code_coverage.tar.gz coverage.json html report.json summary.rmu
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph code-coverage
- name: Save code coverage
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: "Code coverage"
name: "Code coverage(Code analysis)"
path: tools/github/generated/code_coverage.tar.gz
- name: Run clang-tidy
run: |
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph clang-tidy --base-branch "${{ env.BASE_BRANCH }}"
# Restrict clang-tidy results only to the modified parts
git diff -U0 ${{ env.BASE_BRANCH }}... -- src | ./tools/github/clang-tidy/clang-tidy-diff.py -p 1 -j $THREADS -path build -regex ".+\.cpp" | tee ./build/clang_tidy_output.txt
# Fail if any warning is reported
! cat ./build/clang_tidy_output.txt | ./tools/github/clang-tidy/grep_error_lines.sh > /dev/null
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
debug_build:
name: "Debug build"
runs-on: [self-hosted, Linux, X64, Diff]
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 100
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: Debug
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build debug binaries
- name: Spin up mgbuild container
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build debug binaries.
cd build
cmake ..
make -j$THREADS
- name: Run leftover CTest tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run leftover CTest tests (all except unit and benchmark tests).
cd build
ctest -E "(memgraph__unit|memgraph__benchmark)" --output-on-failure
- name: Run drivers tests
run: |
./tests/drivers/run.sh
- name: Run integration tests
run: |
tests/integration/run.sh
- name: Run cppcheck and clang-format
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run cppcheck and clang-format.
cd tools/github
./cppcheck_and_clang_format diff
- name: Save cppcheck and clang-format errors
uses: actions/upload-artifact@v3
with:
name: "Code coverage"
path: tools/github/cppcheck_and_clang_format.txt
release_build:
name: "Release build"
runs-on: [self-hosted, Linux, X64, Diff]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
steps:
- name: Set up repository
uses: actions/checkout@v3
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph
# Initialize dependencies.
./init
- name: Run leftover CTest tests
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph leftover-CTest
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j$THREADS
- name: Run drivers tests
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph drivers
- name: Run HA driver tests
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph drivers-high-availability
- name: Run integration tests
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph integration
- name: Run cppcheck and clang-format
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph cppcheck-and-clang-format
- name: Save cppcheck and clang-format errors
uses: actions/upload-artifact@v4
with:
name: "Code coverage(Debug build)"
path: tools/github/cppcheck_and_clang_format.txt
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
release_build:
name: "Release build"
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 100
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: Release
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Spin up mgbuild container
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
- name: Build release binaries
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph
- name: Run GQL Behave tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
cd gql_behave
./continuous_integration
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph gql-behave
- name: Save quality assurance status
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: "GQL Behave Status"
path: |
@@ -260,14 +364,19 @@ jobs:
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure -j$THREADS
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph unit
# This step will be skipped because the e2e stream tests have been disabled
# We need to fix this as soon as possible
- name: Ensure Kafka and Pulsar are up
if: false
run: |
cd tests/e2e/streams/kafka
docker-compose up -d
@@ -276,14 +385,17 @@ jobs:
- name: Run e2e tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
source ve3/bin/activate_e2e
cd e2e
./run.sh
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph e2e
# Same as two steps prior
- name: Ensure Kafka and Pulsar are down
if: always()
if: false
run: |
cd tests/e2e/streams/kafka
docker-compose down
@@ -292,188 +404,131 @@ jobs:
- name: Run stress test (plain)
run: |
cd tests/stress
source ve3/bin/activate
./continuous_integration
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph stress-plain
- name: Run stress test (SSL)
run: |
cd tests/stress
source ve3/bin/activate
./continuous_integration --use-ssl
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph stress-ssl
- name: Run durability test
run: |
cd tests/stress
source ve3/bin/activate
python3 durability --num-steps 5
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph durability
- name: Create enterprise DEB package
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
cd build
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
package-memgraph
# create mgconsole
# we use the -B to force the build
make -j$THREADS -B mgconsole
# Create enterprise DEB package.
mkdir output && cd output
cpack -G DEB --config ../CPackConfig.cmake
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
copy --package
- name: Save enterprise DEB package
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: "Enterprise DEB package"
path: build/output/memgraph*.deb
path: build/output/${{ env.OS }}/memgraph*.deb
- name: Copy build logs
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
copy --build-logs
- name: Save test data
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
if: always()
with:
name: "Test data"
path: |
# multiple paths could be defined
build/logs
name: "Test data(Release build)"
path: build/logs
experimental_build_ha:
name: "High availability build"
runs-on: [self-hosted, Linux, X64, Diff]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
steps:
- name: Set up repository
uses: actions/checkout@v3
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
source /opt/toolchain-v4/activate
./init
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DMG_EXPERIMENTAL_HIGH_AVAILABILITY=ON ..
make -j$THREADS
- name: Run unit tests
run: |
source /opt/toolchain-v4/activate
cd build
ctest -R memgraph__unit --output-on-failure -j$THREADS
- name: Run e2e tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
source ve3/bin/activate_e2e
cd e2e
./run.sh "Coordinator"
./run.sh "Client initiated failover"
./run.sh "Uninitialized cluster"
- name: Save test data
uses: actions/upload-artifact@v3
- name: Stop mgbuild container
if: always()
with:
name: "Test data"
path: |
# multiple paths could be defined
build/logs
experimental_build_mt:
name: "MultiTenancy replication build"
runs-on: [self-hosted, Linux, X64, Diff]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
steps:
- name: Set up repository
uses: actions/checkout@v3
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build MT replication experimental binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=Release -D MG_EXPERIMENTAL_REPLICATION_MULTITENANCY=ON ..
make -j$THREADS
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure -j$THREADS
- name: Run e2e tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
source ve3/bin/activate_e2e
cd e2e
# Just the replication based e2e tests
./run.sh "Replicate multitenancy"
./run.sh "Show"
./run.sh "Show while creating invalid state"
./run.sh "Delete edge replication"
./run.sh "Read-write benchmark"
./run.sh "Index replication"
./run.sh "Constraints"
- name: Save test data
uses: actions/upload-artifact@v3
if: always()
with:
name: "Test data"
path: |
# multiple paths could be defined
build/logs
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
release_jepsen_test:
name: "Release Jepsen Test"
runs-on: [self-hosted, Linux, X64, Debian10, JepsenControl]
#continue-on-error: true
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 80
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-12
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: RelWithDebInfo
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Spin up mgbuild container
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build only the memgraph release binary.
cd build
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
make -j$THREADS memgraph
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph
- name: Copy memgraph binary
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
copy --binary
- name: Refresh Jepsen Cluster
run: |
cd tests/jepsen
./run.sh cluster-refresh
- name: Run Jepsen tests
run: |
@@ -481,47 +536,69 @@ jobs:
./run.sh test-all-individually --binary ../../build/memgraph --ignore-run-stdout-logs --ignore-run-stderr-logs
- name: Save Jepsen report
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
if: ${{ always() }}
with:
name: "Jepsen Report"
path: tests/jepsen/Jepsen.tar.gz
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
release_benchmarks:
name: "Release benchmarks"
runs-on: [self-hosted, Linux, X64, Diff, Gen7]
runs-on: [self-hosted, Linux, X64, DockerMgBuild, Gen7]
timeout-minutes: 60
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: Release
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Spin up mgbuild container
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build only memgraph release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=release ..
make -j$THREADS
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph
- name: Run macro benchmarks
run: |
cd tests/macro_benchmark
./harness QuerySuite MemgraphRunner \
--groups aggregation 1000_create unwind_create dense_expand match \
--no-strict
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph macro-benchmark
- name: Get branch name (merge)
if: github.event_name != 'pull_request'
@@ -535,30 +612,49 @@ jobs:
- name: Upload macro benchmark results
run: |
cd tools/bench-graph-client
virtualenv -p python3 ve3
source ve3/bin/activate
pip install -r requirements.txt
./main.py --benchmark-name "macro_benchmark" \
--benchmark-results "../../tests/macro_benchmark/.harness_summary" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph upload-to-bench-graph \
--benchmark-name "macro_benchmark" \
--benchmark-results "../../tests/macro_benchmark/.harness_summary" \
--github-run-id ${{ github.run_id }} \
--github-run-number ${{ github.run_number }} \
--head-branch-name ${{ env.BRANCH_NAME }}
# TODO (andi) No need for path flags and for --disk-storage and --in-memory-analytical
- name: Run mgbench
run: |
cd tests/mgbench
./benchmark.py vendor-native --num-workers-for-benchmark 12 --export-results benchmark_result.json pokec/medium/*/*
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph mgbench
- name: Upload mgbench results
run: |
cd tools/bench-graph-client
virtualenv -p python3 ve3
source ve3/bin/activate
pip install -r requirements.txt
./main.py --benchmark-name "mgbench" \
--benchmark-results "../../tests/mgbench/benchmark_result.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph upload-to-bench-graph \
--benchmark-name "mgbench" \
--benchmark-results "../../tests/mgbench/benchmark_result.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove

View File

@@ -14,7 +14,7 @@ jobs:
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)

View File

@@ -42,14 +42,14 @@ jobs:
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package amzn-2 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: amzn-2
path: build/output/amzn-2/memgraph*.rpm
@@ -60,14 +60,14 @@ jobs:
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package centos-7 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: centos-7
path: build/output/centos-7/memgraph*.rpm
@@ -78,14 +78,14 @@ jobs:
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package centos-9 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: centos-9
path: build/output/centos-9/memgraph*.rpm
@@ -96,14 +96,14 @@ jobs:
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-10 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: debian-10
path: build/output/debian-10/memgraph*.deb
@@ -114,14 +114,14 @@ jobs:
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: debian-11
path: build/output/debian-11/memgraph*.deb
@@ -132,14 +132,14 @@ jobs:
timeout-minutes: 120
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11-arm ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: debian-11-aarch64
path: build/output/debian-11-arm/memgraph*.deb
@@ -150,14 +150,14 @@ jobs:
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11 ${{ github.event.inputs.build_type }} --for-platform
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: debian-11-platform
path: build/output/debian-11/memgraph*.deb
@@ -168,7 +168,7 @@ jobs:
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
@@ -177,7 +177,7 @@ jobs:
./run.sh package debian-11 ${{ github.event.inputs.build_type }} --for-docker
./run.sh docker
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: docker
path: build/output/docker/memgraph*.tar.gz
@@ -188,14 +188,14 @@ jobs:
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package fedora-36 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: fedora-36
path: build/output/fedora-36/memgraph*.rpm
@@ -206,14 +206,14 @@ jobs:
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-18.04 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ubuntu-18.04
path: build/output/ubuntu-18.04/memgraph*.deb
@@ -224,14 +224,14 @@ jobs:
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-20.04 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ubuntu-20.04
path: build/output/ubuntu-20.04/memgraph*.deb
@@ -242,14 +242,14 @@ jobs:
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-22.04 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ubuntu-22.04
path: build/output/ubuntu-22.04/memgraph*.deb
@@ -260,14 +260,14 @@ jobs:
timeout-minutes: 120
steps:
- name: "Set up repository"
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-22.04-arm ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ubuntu-22.04-aarch64
path: build/output/ubuntu-22.04-arm/memgraph*.deb
@@ -279,7 +279,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Download artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
# name: # if name input parameter is not provided, all artifacts are downloaded
# and put in directories named after each one.

View File

@@ -14,7 +14,7 @@ jobs:
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)

View File

@@ -0,0 +1,208 @@
name: Release build test
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}
cancel-in-progress: true
on:
workflow_dispatch:
inputs:
build_type:
type: choice
description: "Memgraph Build type. Default value is Release."
default: 'Release'
options:
- Release
- RelWithDebInfo
push:
branches:
- "release/**"
tags:
- "v*.*.*-rc*"
- "v*.*-rc*"
schedule:
# UTC
- cron: "0 22 * * *"
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
BUILD_TYPE: ${{ github.event.inputs.build_type || 'Release' }}
jobs:
Debian10:
uses: ./.github/workflows/release_debian10.yaml
with:
build_type: ${{ github.event.inputs.build_type || 'Release' }}
secrets: inherit
Ubuntu20_04:
uses: ./.github/workflows/release_ubuntu2004.yaml
with:
build_type: ${{ github.event.inputs.build_type || 'Release' }}
secrets: inherit
PackageDebian10:
if: github.ref_type == 'tag'
needs: [Debian10]
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-10 $BUILD_TYPE
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-10
path: build/output/debian-10/memgraph*.deb
PackageUbuntu20_04:
if: github.ref_type == 'tag'
needs: [Ubuntu20_04]
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-22.04 $BUILD_TYPE
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: ubuntu-22.04
path: build/output/ubuntu-22.04/memgraph*.deb
PackageUbuntu20_04_ARM:
if: github.ref_type == 'tag'
needs: [Ubuntu20_04]
runs-on: [self-hosted, DockerMgBuild, ARM64]
# M1 Mac mini is sometimes slower
timeout-minutes: 150
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-22.04-arm $BUILD_TYPE
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: ubuntu-22.04-aarch64
path: build/output/ubuntu-22.04-arm/memgraph*.deb
PushToS3Ubuntu20_04_ARM:
if: github.ref_type == 'tag'
needs: [PackageUbuntu20_04_ARM]
runs-on: ubuntu-latest
steps:
- name: Download package
uses: actions/download-artifact@v4
with:
name: ubuntu-22.04-aarch64
path: build/output/release
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output/release"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"
PackageDebian11:
if: github.ref_type == 'tag'
needs: [Debian10, Ubuntu20_04]
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11 $BUILD_TYPE
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-11
path: build/output/debian-11/memgraph*.deb
PackageDebian11_ARM:
if: github.ref_type == 'tag'
needs: [Debian10, Ubuntu20_04]
runs-on: [self-hosted, DockerMgBuild, ARM64]
# M1 Mac mini is sometimes slower
timeout-minutes: 150
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11-arm $BUILD_TYPE
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-11-aarch64
path: build/output/debian-11-arm/memgraph*.deb
PushToS3Debian11_ARM:
if: github.ref_type == 'tag'
needs: [PackageDebian11_ARM]
runs-on: ubuntu-latest
steps:
- name: Download package
uses: actions/download-artifact@v4
with:
name: debian-11-aarch64
path: build/output/release
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output/release"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"

View File

@@ -1,6 +1,12 @@
name: Release Debian 10
on:
workflow_call:
inputs:
build_type:
type: string
description: "Memgraph Build type. Default value is Release."
default: 'Release'
workflow_dispatch:
inputs:
build_type:
@@ -11,22 +17,22 @@ on:
- Release
- RelWithDebInfo
schedule:
- cron: "0 22 * * *"
env:
OS: "Debian10"
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
BUILD_TYPE: ${{ github.event.inputs.build_type || 'Release' }}
jobs:
community_build:
name: "Community build"
runs-on: [self-hosted, Linux, X64, Debian10]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 960
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@@ -40,10 +46,6 @@ jobs:
# Initialize dependencies.
./init
# Set default build_type to Release
INPUT_BUILD_TYPE=${{ github.event.inputs.build_type }}
BUILD_TYPE=${INPUT_BUILD_TYPE:-"Release"}
# Build community binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE -DMG_ENTERPRISE=OFF ..
@@ -65,10 +67,11 @@ jobs:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@@ -110,22 +113,19 @@ jobs:
tar -czf code_coverage.tar.gz coverage.json html report.json summary.rmu
- name: Save code coverage
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: "Code coverage"
name: "Code coverage(Coverage build)-${{ env.OS }}"
path: tools/github/generated/code_coverage.tar.gz
debug_build:
name: "Debug build"
runs-on: [self-hosted, Linux, X64, Debian10]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@ -157,10 +157,6 @@ jobs:
run: |
./tests/drivers/run.sh
- name: Run integration tests
run: |
tests/integration/run.sh
- name: Run cppcheck and clang-format
run: |
# Activate toolchain.
@ -171,23 +167,49 @@ jobs:
./cppcheck_and_clang_format diff
- name: Save cppcheck and clang-format errors
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: "Code coverage"
name: "Code coverage(Debug build)-${{ env.OS }}"
path: tools/github/cppcheck_and_clang_format.txt
release_build:
name: "Release build"
runs-on: [self-hosted, Linux, X64, Debian10, BigMemory]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 960
debug_integration_test:
name: "Debug integration tests"
runs-on: [self-hosted, Linux, X64, Debian10]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build debug binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build debug binaries.
cd build
cmake ..
make -j$THREADS
- name: Run integration tests
run: |
tests/integration/run.sh
release_build:
name: "Release build"
runs-on: [self-hosted, Linux, X64, Debian10]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@ -201,10 +223,6 @@ jobs:
# Initialize dependencies.
./init
# Set default build_type to Release
INPUT_BUILD_TYPE=${{ github.event.inputs.build_type }}
BUILD_TYPE=${INPUT_BUILD_TYPE:-"Release"}
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
@ -226,11 +244,60 @@ jobs:
cpack -G DEB --config ../CPackConfig.cmake
- name: Save enterprise DEB package
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: "Enterprise DEB package"
name: "Enterprise DEB package-${{ env.OS}}"
path: build/output/memgraph*.deb
- name: Run GQL Behave tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
cd gql_behave
./continuous_integration
- name: Save quality assurance status
uses: actions/upload-artifact@v4
with:
name: "GQL Behave Status-${{ env.OS }}"
path: |
tests/gql_behave/gql_behave_status.csv
tests/gql_behave/gql_behave_status.html
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure
release_benchmark_tests:
name: "Release Benchmark Tests"
runs-on: [self-hosted, Linux, X64, Debian10]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build release binaries
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Run micro benchmark tests
run: |
# Activate toolchain.
@ -257,29 +324,30 @@ jobs:
--num-database-workers 9 --num-clients-workers 30 \
--no-strict
- name: Run GQL Behave tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
cd gql_behave
./continuous_integration
release_e2e_test:
name: "Release End-to-end Test"
runs-on: [self-hosted, Linux, X64, Debian10]
timeout-minutes: 60
- name: Save quality assurance status
uses: actions/upload-artifact@v3
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
name: "GQL Behave Status"
path: |
tests/gql_behave/gql_behave_status.csv
tests/gql_behave/gql_behave_status.html
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Run unit tests
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Run unit tests.
# Build release binaries
cd build
ctest -R memgraph__unit --output-on-failure
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Ensure Kafka and Pulsar are up
run: |
@ -304,6 +372,32 @@ jobs:
cd ../pulsar
docker-compose down
release_durability_stress_tests:
name: "Release durability and stress tests"
runs-on: [self-hosted, Linux, X64, Debian10]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Run stress test (plain)
run: |
cd tests/stress
@ -314,11 +408,6 @@ jobs:
cd tests/stress
./continuous_integration --use-ssl
- name: Run stress test (large)
run: |
cd tests/stress
./continuous_integration --large-dataset
- name: Run durability test (plain)
run: |
cd tests/stress
@ -334,15 +423,11 @@ jobs:
release_jepsen_test:
name: "Release Jepsen Test"
runs-on: [self-hosted, Linux, X64, Debian10, JepsenControl]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@ -355,23 +440,24 @@ jobs:
# Initialize dependencies.
./init
# Set default build_type to Release
INPUT_BUILD_TYPE=${{ github.event.inputs.build_type }}
BUILD_TYPE=${INPUT_BUILD_TYPE:-"Release"}
# Build only memgraph release binary.
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS memgraph
- name: Refresh Jepsen Cluster
run: |
cd tests/jepsen
./run.sh cluster-refresh
- name: Run Jepsen tests
run: |
cd tests/jepsen
./run.sh test-all-individually --binary ../../build/memgraph --ignore-run-stdout-logs --ignore-run-stderr-logs
- name: Save Jepsen report
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
if: ${{ always() }}
with:
name: "Jepsen Report"
name: "Jepsen Report-${{ env.OS }}"
path: tests/jepsen/Jepsen.tar.gz

View File

@ -19,7 +19,7 @@ jobs:
DOCKER_REPOSITORY_NAME: memgraph
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v2

View File

@ -20,7 +20,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v2

View File

@ -1,6 +1,12 @@
name: Release Ubuntu 20.04
on:
workflow_call:
inputs:
build_type:
type: string
description: "Memgraph Build type. Default value is Release."
default: 'Release'
workflow_dispatch:
inputs:
build_type:
@ -11,22 +17,22 @@ on:
- Release
- RelWithDebInfo
schedule:
- cron: "0 22 * * *"
env:
OS: "Ubuntu 20.04"
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
BUILD_TYPE: ${{ github.event.inputs.build_type || 'Release' }}
jobs:
community_build:
name: "Community build"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 960
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@ -40,10 +46,6 @@ jobs:
# Initialize dependencies.
./init
# Set default build_type to Release
INPUT_BUILD_TYPE=${{ github.event.inputs.build_type }}
BUILD_TYPE=${INPUT_BUILD_TYPE:-"Release"}
# Build community binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE -DMG_ENTERPRISE=OFF ..
@ -61,14 +63,11 @@ jobs:
coverage_build:
name: "Coverage build"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@ -110,22 +109,19 @@ jobs:
tar -czf code_coverage.tar.gz coverage.json html report.json summary.rmu
- name: Save code coverage
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: "Code coverage"
name: "Code coverage(Coverage build)-${{ env.OS }}"
path: tools/github/generated/code_coverage.tar.gz
debug_build:
name: "Debug build"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@ -157,10 +153,6 @@ jobs:
run: |
./tests/drivers/run.sh
- name: Run integration tests
run: |
tests/integration/run.sh
- name: Run cppcheck and clang-format
run: |
# Activate toolchain.
@ -171,23 +163,49 @@ jobs:
./cppcheck_and_clang_format diff
- name: Save cppcheck and clang-format errors
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: "Code coverage"
name: "Code coverage(Debug build)-${{ env.OS }}"
path: tools/github/cppcheck_and_clang_format.txt
debug_integration_test:
name: "Debug integration tests"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build debug binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build debug binaries.
cd build
cmake ..
make -j$THREADS
- name: Run integration tests
run: |
tests/integration/run.sh
release_build:
name: "Release build"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 960
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@ -201,10 +219,6 @@ jobs:
# Initialize dependencies.
./init
# Set default build_type to Release
INPUT_BUILD_TYPE=${{ github.event.inputs.build_type }}
BUILD_TYPE=${INPUT_BUILD_TYPE:-"Release"}
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
@ -226,11 +240,60 @@ jobs:
cpack -G DEB --config ../CPackConfig.cmake
- name: Save enterprise DEB package
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: "Enterprise DEB package"
name: "Enterprise DEB package-${{ env.OS }}"
path: build/output/memgraph*.deb
- name: Run GQL Behave tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
cd gql_behave
./continuous_integration
- name: Save quality assurance status
uses: actions/upload-artifact@v4
with:
name: "GQL Behave Status-${{ env.OS }}"
path: |
tests/gql_behave/gql_behave_status.csv
tests/gql_behave/gql_behave_status.html
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure
release_benchmark_tests:
name: "Release Benchmark Tests"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build release binaries
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Run micro benchmark tests
run: |
# Activate toolchain.
@ -257,29 +320,30 @@ jobs:
--num-database-workers 9 --num-clients-workers 30 \
--no-strict
- name: Run GQL Behave tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
cd gql_behave
./continuous_integration
release_e2e_test:
name: "Release End-to-end Test"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
timeout-minutes: 60
- name: Save quality assurance status
uses: actions/upload-artifact@v3
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
name: "GQL Behave Status"
path: |
tests/gql_behave/gql_behave_status.csv
tests/gql_behave/gql_behave_status.html
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Run unit tests
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Run unit tests.
# Build release binaries
cd build
ctest -R memgraph__unit --output-on-failure
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Ensure Kafka and Pulsar are up
run: |
@ -304,6 +368,32 @@ jobs:
cd ../pulsar
docker-compose down
release_durability_stress_tests:
name: "Release durability and stress tests"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Run stress test (plain)
run: |
cd tests/stress
@ -314,11 +404,6 @@ jobs:
cd tests/stress
./continuous_integration --use-ssl
- name: Run stress test (large)
run: |
cd tests/stress
./continuous_integration --large-dataset
- name: Run durability test (plain)
run: |
cd tests/stress

View File

@ -0,0 +1,68 @@
name: Stress test large
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}
cancel-in-progress: true
on:
workflow_dispatch:
inputs:
build_type:
type: choice
description: "Memgraph Build type. Default value is Release."
default: 'Release'
options:
- Release
- RelWithDebInfo
push:
tags:
- "v*.*.*-rc*"
- "v*.*-rc*"
schedule:
- cron: "0 22 * * *"
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
BUILD_TYPE: ${{ github.event.inputs.build_type || 'Release' }}
jobs:
stress_test_large:
name: "Stress test large"
timeout-minutes: 720
strategy:
matrix:
os: [Debian10, Ubuntu20.04]
extra: [BigMemory, Gen8]
exclude:
- os: Debian10
extra: Gen8
- os: Ubuntu20.04
extra: BigMemory
runs-on: [self-hosted, Linux, X64, "${{ matrix.os }}", "${{ matrix.extra }}"]
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Run stress test (large)
run: |
cd tests/stress
./continuous_integration --large-dataset
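Note that the matrix and exclude entries above resolve to exactly two of the four os-by-extra combinations, so the job effectively runs on:

runs-on: [self-hosted, Linux, X64, Debian10, BigMemory]
runs-on: [self-hosted, Linux, X64, Ubuntu20.04, Gen8]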

View File

@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@v2
uses: dawidd6/action-download-artifact@v4
with:
workflow: package_all.yaml
workflow_conclusion: success

ADRs/003_rocksdb.md Normal file
View File

@ -0,0 +1,38 @@
# RocksDB ADR
**Author**
Marko Budiselic (github.com/gitbuda)
**Status**
ACCEPTED
**Date**
January 23, 2024
**Problem**
Interacting with data (reads and writes) on disk in a concurrent, safe, and
fast way is a challenging task. Implementing all low-level primitives to
interact with various disk hardware efficiently consumes significant
engineering effort. Whenever Memgraph has to store data on disk (or any
other storage system colder than RAM), the problem is how to do that in the
least amount of development time while satisfying all functional requirements
(often performance).
**Criteria**
- working efficiently in a highly concurrent environment
- easy integration with Memgraph's C++ codebase
- providing low-level key-value API
- heavily tested in production environments
- providing abstractions for the storage hardware (even for cloud-based
storages like S3)
**Decision**
There are a few robust key-value stores, but finding one that is
production-ready and compatible with Memgraph's C++ codebase is challenging.
**We select [RocksDB](https://github.com/facebook/rocksdb)** because it
delivers a robust API to manage data on disk; it's battle-tested in many
production environments (many database systems embed RocksDB), and
it's the most compatible option.
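To make the low-level key-value API named in the criteria concrete, here is a minimal sketch of RocksDB's canonical open/put/get flow (the database path is arbitrary):

#include <cassert>
#include <string>
#include <rocksdb/db.h>

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;  // create the database on first open
  rocksdb::DB *db = nullptr;
  rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/rocksdb-example", &db);
  assert(status.ok());
  // Write a key-value pair, then read it back.
  status = db->Put(rocksdb::WriteOptions(), "key1", "value1");
  assert(status.ok());
  std::string value;
  status = db->Get(rocksdb::ReadOptions(), "key1", &value);
  assert(status.ok() && value == "value1");
  delete db;  // closes the database
  return 0;
}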

View File

@ -211,8 +211,13 @@ set(CMAKE_CXX_FLAGS_RELWITHDEBINFO
# ** Static linking is allowed only for executables! **
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libgcc -static-libstdc++")
# Use lld linker to speedup build
add_link_options(-fuse-ld=lld) # TODO: use mold linker
# Use lld linker to speedup build and use less memory.
add_link_options(-fuse-ld=lld)
# NOTE: Moving to the latest Clang (probably starting from 15), lld stopped working
# without an explicit link_directories call.
string(REPLACE ":" " " LD_LIBS $ENV{LD_LIBRARY_PATH})
separate_arguments(LD_LIBS)
link_directories(${LD_LIBS})
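As an illustration, if the active toolchain exported LD_LIBRARY_PATH=/opt/toolchain-v5/lib:/opt/toolchain-v5/lib64 (a hypothetical value), the two commands above would expand to:

link_directories(/opt/toolchain-v5/lib /opt/toolchain-v5/lib64)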
# release flags
set(CMAKE_CXX_FLAGS_RELEASE "-O2 -DNDEBUG")
@ -271,18 +276,6 @@ endif()
set(libs_dir ${CMAKE_SOURCE_DIR}/libs)
add_subdirectory(libs EXCLUDE_FROM_ALL)
option(MG_EXPERIMENTAL_HIGH_AVAILABILITY "Feature flag for experimental high availability" OFF)
if (NOT MG_ENTERPRISE AND MG_EXPERIMENTAL_HIGH_AVAILABILITY)
set(MG_EXPERIMENTAL_HIGH_AVAILABILITY OFF)
message(FATAL_ERROR "MG_EXPERIMENTAL_HIGH_AVAILABILITY must be used with enterpise version of the code.")
endif ()
if (MG_EXPERIMENTAL_HIGH_AVAILABILITY)
add_compile_definitions(MG_EXPERIMENTAL_HIGH_AVAILABILITY)
endif ()
# Optional subproject configuration -------------------------------------------
option(TEST_COVERAGE "Generate coverage reports from running memgraph" OFF)
option(TOOLS "Build tools binaries" ON)
option(QUERY_MODULES "Build query modules containing custom procedures" ON)
@ -291,16 +284,6 @@ option(TSAN "Build with Thread Sanitizer. To get a reasonable performance option
option(UBSAN "Build with Undefined Behaviour Sanitizer" OFF)
# Build feature flags
option(MG_EXPERIMENTAL_REPLICATION_MULTITENANCY "Feature flag for experimental replicaition of multitenacy" OFF)
if (NOT MG_ENTERPRISE AND MG_EXPERIMENTAL_REPLICATION_MULTITENANCY)
set(MG_EXPERIMENTAL_REPLICATION_MULTITENANCY OFF)
message(FATAL_ERROR "MG_EXPERIMENTAL_REPLICATION_MULTITENANCY with community edition build isn't possible")
endif ()
if (MG_EXPERIMENTAL_REPLICATION_MULTITENANCY)
add_compile_definitions(MG_EXPERIMENTAL_REPLICATION_MULTITENANCY)
endif ()
if (TEST_COVERAGE)
string(TOLOWER ${CMAKE_BUILD_TYPE} lower_build_type)
@ -317,6 +300,19 @@ endif()
option(ENABLE_JEMALLOC "Use jemalloc" ON)
option(MG_MEMORY_PROFILE "If build should be setup for memory profiling" OFF)
if (MG_MEMORY_PROFILE AND ENABLE_JEMALLOC)
message(STATUS "Jemalloc has been disabled because MG_MEMORY_PROFILE is enabled")
set(ENABLE_JEMALLOC OFF)
endif ()
if (MG_MEMORY_PROFILE AND ASAN)
message(STATUS "ASAN has been disabled because MG_MEMORY_PROFILE is enabled")
set(ASAN OFF)
endif ()
if (MG_MEMORY_PROFILE)
add_compile_definitions(MG_MEMORY_PROFILE)
endif ()
if (ASAN)
message(WARNING "Disabling jemalloc as it doesn't work well with ASAN")
set(ENABLE_JEMALLOC OFF)
@ -341,6 +337,8 @@ if (ASAN)
endif()
if (TSAN)
message(WARNING "Disabling jemalloc as it doesn't work well with ASAN")
set(ENABLE_JEMALLOC OFF)
# ThreadSanitizer generally requires all code to be compiled with -fsanitize=thread.
# If some code (e.g. dynamic libraries) is not compiled with the flag, it can
# lead to false positive race reports, false negative race reports and/or
@ -356,7 +354,7 @@ if (TSAN)
# By default ThreadSanitizer uses addr2line utility to symbolize reports.
# llvm-symbolizer is faster, consumes less memory and produces much better
# reports. To use it set runtime flag:
# TSAN_OPTIONS="extern-symbolizer-path=~/llvm-symbolizer"
# For more runtime flags see: https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags
endif()

View File

@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -9,7 +7,7 @@ check_operating_system "amzn-2"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
gcc gcc-c++ make # generic build tools
git gcc gcc-c++ make # generic build tools
wget # used for archive download
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
@ -47,6 +45,7 @@ MEMGRAPH_BUILD_DEPS=(
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
openssl
libseccomp-devel
python3 python3-pip nmap-ncat # for tests
#
@ -63,6 +62,8 @@ MEMGRAPH_BUILD_DEPS=(
cyrus-sasl-devel
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)

View File

@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -45,6 +43,7 @@ MEMGRAPH_BUILD_DEPS=(
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
openssl
libseccomp-devel
python3 python-virtualenv python3-pip nmap-ncat # for qa, macro_benchmark and stress tests
#
@ -63,6 +62,8 @@ MEMGRAPH_BUILD_DEPS=(
cyrus-sasl-devel
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)

View File

@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -9,8 +7,10 @@ check_operating_system "centos-9"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
coreutils-common gcc gcc-c++ make # generic build tools
wget # used for archive download
coreutils-common gcc gcc-c++ make # generic build tools
# NOTE: Pure libcurl conflicts with libcurl-minimal
libcurl-devel # cmake build requires it
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
zlib-devel # zlib library used for all builds
@ -64,6 +64,8 @@ MEMGRAPH_BUILD_DEPS=(
cyrus-sasl-devel
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
@ -123,7 +125,9 @@ install() {
else
echo "NOTE: export LANG=en_US.utf8"
fi
yum update -y
# --nobest is used because we install custom versions of libipt,
# since libipt-devel is not available on CentOS 9 Stream
yum update -y --nobest
yum install -y wget git python3 python3-pip
for pkg in $1; do

View File

@ -1,10 +1,10 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# IMPORTANT: Deprecated since memgraph v2.12.0.
check_operating_system "debian-10"
check_architecture "x86_64"

View File

@ -1,10 +1,10 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# IMPORTANT: Deprecated since memgraph v2.12.0.
check_operating_system "debian-11"
check_architecture "arm64" "aarch64"

View File

@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -61,6 +59,8 @@ MEMGRAPH_BUILD_DEPS=(
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)

environment/os/debian-12-arm.sh Executable file
View File

@ -0,0 +1,134 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "debian-12"
check_architecture "arm64" "aarch64"
TOOLCHAIN_BUILD_DEPS=(
coreutils gcc g++ build-essential make # generic build tools
wget # used for archive download
gnupg # used for archive signature verification
tar gzip bzip2 xz-utils unzip # used for archive unpacking
zlib1g-dev # zlib library used for all builds
libexpat1-dev liblzma-dev python3-dev texinfo # for gdb
libcurl4-openssl-dev # for cmake
libreadline-dev # for cmake and llvm
libffi-dev libxml2-dev # for llvm
libedit-dev libpcre2-dev libpcre3-dev automake bison # for swig
curl # snappy
file # for libunwind
libssl-dev # for libevent
libgmp-dev
gperf # for proxygen
git # for fbthrift
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz-utils # used for archive unpacking
zlib1g # zlib library used for all builds
libexpat1 liblzma5 python3 # for gdb
libcurl4 # for cmake
file # for CPack
libreadline8 # for cmake and llvm
libffi8 libxml2 # for llvm
libssl-dev # for libevent
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkg-config # build system
curl wget # for downloading libs
uuid-dev default-jre-headless # required by antlr
libreadline-dev # for memgraph console
libpython3-dev python3-dev # for query modules
libssl-dev
libseccomp-dev
netcat # tests are using nc to wait for memgraph
python3 virtualenv python3-virtualenv python3-pip # for qa, macro_benchmark and stress tests
python3-yaml # for the configuration generator
libcurl4-openssl-dev # mg-requests
sbcl # for custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs zip unzip default-jdk-headless custom-maven3.9.3 # for driver tests
dotnet-sdk-7.0 golang custom-golang1.18.9 nodejs npm
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
cd "$DIR"
apt update
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
apt install -y wget
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == dotnet-sdk-7.0 ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
wget -nv https://packages.microsoft.com/config/debian/12/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
dpkg -i packages-microsoft-prod.deb
apt-get update
apt-get install -y apt-transport-https dotnet-sdk-7.0
fi
continue
fi
apt install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

environment/os/debian-12.sh Executable file
View File

@ -0,0 +1,136 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "debian-12"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
coreutils gcc g++ build-essential make # generic build tools
wget # used for archive download
gnupg # used for archive signature verification
tar gzip bzip2 xz-utils unzip # used for archive unpacking
zlib1g-dev # zlib library used for all builds
libexpat1-dev libipt-dev libbabeltrace-dev liblzma-dev python3-dev texinfo # for gdb
libcurl4-openssl-dev # for cmake
libreadline-dev # for cmake and llvm
libffi-dev libxml2-dev # for llvm
libedit-dev libpcre2-dev libpcre3-dev automake bison # for swig
curl # snappy
file # for libunwind
libssl-dev # for libevent
libgmp-dev
gperf # for proxygen
git # for fbthrift
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz-utils # used for archive unpacking
zlib1g # zlib library used for all builds
libexpat1 libipt2 libbabeltrace1 liblzma5 python3 # for gdb
libcurl4 # for cmake
file # for CPack
libreadline8 # for cmake and llvm
libffi8 libxml2 # for llvm
libssl-dev # for libevent
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make cmake pkg-config # build system
curl wget # for downloading libs
uuid-dev default-jre-headless # required by antlr
libreadline-dev # for memgraph console
libpython3-dev python3-dev # for query modules
libssl-dev
libseccomp-dev
netcat-traditional # tests are using nc to wait for memgraph
python3 virtualenv python3-virtualenv python3-pip # for qa, macro_benchmark and stress tests
python3-yaml # for the configuration generator
libcurl4-openssl-dev # mg-requests
sbcl # for custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs zip unzip default-jdk-headless custom-maven3.9.3 # for driver tests
dotnet-sdk-7.0 golang custom-golang1.18.9 nodejs npm
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
cd "$DIR"
apt update
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
apt install -y wget
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == dotnet-sdk-7.0 ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
wget -nv https://packages.microsoft.com/config/debian/12/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
dpkg -i packages-microsoft-prod.deb
apt-get update
apt-get install -y apt-transport-https dotnet-sdk-7.0
fi
continue
fi
apt install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

View File

@ -1,10 +1,10 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# IMPORTANT: Deprecated since memgraph v2.12.0.
check_operating_system "fedora-36"
check_architecture "x86_64"
@ -27,6 +27,7 @@ TOOLCHAIN_BUILD_DEPS=(
libipt libipt-devel # intel
patch
perl # for openssl
git
)
TOOLCHAIN_RUN_DEPS=(

View File

@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -27,6 +25,7 @@ TOOLCHAIN_BUILD_DEPS=(
libipt libipt-devel # intel
patch
perl # for openssl
git
)
TOOLCHAIN_RUN_DEPS=(
@ -58,6 +57,16 @@ MEMGRAPH_BUILD_DEPS=(
libtool # for protobuf code generation
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}

environment/os/fedora-39.sh Executable file
View File

@ -0,0 +1,117 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "fedora-39"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
coreutils-common gcc gcc-c++ make # generic build tools
wget # used for archive download
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
zlib-devel # zlib library used for all builds
expat-devel xz-devel python3-devel texinfo libbabeltrace-devel # for gdb
curl libcurl-devel # for cmake
readline-devel # for cmake and llvm
libffi-devel libxml2-devel # for llvm
libedit-devel pcre-devel pcre2-devel automake bison # for swig
file
openssl-devel
gmp-devel
gperf
diffutils
libipt libipt-devel # intel
patch
perl # for openssl
git
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz # used for archive unpacking
zlib # zlib library used for all builds
expat xz-libs python3 # for gdb
readline # for cmake and llvm
libffi libxml2 # for llvm
openssl-devel
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkgconf-pkg-config # build system
wget # for downloading libs
libuuid-devel java-11-openjdk # required by antlr
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
libseccomp-devel
python3 python3-pip python3-virtualenv python3-virtualenvwrapper python3-pyyaml nmap-ncat # for tests
libcurl-devel # mg-requests
rpm-build rpmlint # for RPM package building
doxygen graphviz # source documentation generators
which nodejs golang zip unzip java-11-openjdk-devel # for driver tests
sbcl # for custom Lisp C++ preprocessing
autoconf # for jemalloc code generation
libtool # for protobuf code generation
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
if [ -v LD_LIBRARY_PATH ]; then
# On Fedora 38 yum/dnf and python3.11 use a newer glibc which is not compatible
# with ours, so we need to momentarily disable the env
local OLD_LD_LIBRARY_PATH=${LD_LIBRARY_PATH}
LD_LIBRARY_PATH=""
fi
local missing=""
for pkg in $1; do
if ! dnf list installed "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
if [ -v OLD_LD_LIBRARY_PATH ]; then
echo "Restoring LD_LIBRARY_PATH..."
LD_LIBRARY_PATH=${OLD_LD_LIBRARY_PATH}
fi
}
install() {
cd "$DIR"
if [ "$EUID" -ne 0 ]; then
echo "Please run as root."
exit 1
fi
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
dnf update -y
for pkg in $1; do
dnf install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

environment/os/rocky-9.3.sh Executable file
View File

@ -0,0 +1,212 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# TODO(gitbuda): Rocky gets automatic updates -> figure out how to handle it.
check_operating_system "rocky-9.3"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
wget # used for archive download
coreutils-common gcc gcc-c++ make # generic build tools
# NOTE: Pure libcurl conflicts with libcurl-minimal
libcurl-devel # cmake build requires it
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
zlib-devel # zlib library used for all builds
expat-devel xz-devel python3-devel perl-Unicode-EastAsianWidth texinfo libbabeltrace-devel # for gdb
readline-devel # for cmake and llvm
libffi-devel libxml2-devel # for llvm
libedit-devel pcre-devel pcre2-devel automake bison # for swig
file
openssl-devel
gmp-devel
gperf
diffutils
libipt libipt-devel # intel
patch
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz # used for archive unpacking
zlib # zlib library used for all builds
expat xz-libs python3 # for gdb
readline # for cmake and llvm
libffi libxml2 # for llvm
openssl-devel
perl # for openssl
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make cmake pkgconf-pkg-config # build system
wget # for downloading libs
libuuid-devel java-11-openjdk # required by antlr
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
libseccomp-devel
python3 python3-pip python3-virtualenv nmap-ncat # for qa, macro_benchmark and stress tests
#
# IMPORTANT: python3-yaml does NOT exist on CentOS
# Install it manually using `pip3 install PyYAML`
#
PyYAML # Package name here does not correspond to the yum package!
libcurl-devel # mg-requests
rpm-build rpmlint # for RPM package building
doxygen graphviz # source documentation generators
which nodejs golang custom-golang1.18.9 # for driver tests
zip unzip java-11-openjdk-devel java-17-openjdk java-17-openjdk-devel custom-maven3.9.3 # for driver tests
cl-asdf common-lisp-controller sbcl # for custom Lisp C++ preprocessing
autoconf # for jemalloc code generation
libtool # for protobuf code generation
cyrus-sasl-devel
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == "PyYAML" ]; then
if ! python3 -c "import yaml" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == "python3-virtualenv" ]; then
continue
fi
if ! yum list installed "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
cd "$DIR"
if [ "$EUID" -ne 0 ]; then
echo "Please run as root."
exit 1
fi
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
yum update -y
yum install -y wget git python3 python3-pip
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == perl-Unicode-EastAsianWidth ]; then
if ! dnf list installed perl-Unicode-EastAsianWidth >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/CRB/x86_64/os/Packages/p/perl-Unicode-EastAsianWidth-12.0-7.el9.noarch.rpm
fi
continue
fi
if [ "$pkg" == texinfo ]; then
if ! dnf list installed texinfo >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/CRB/x86_64/os/Packages/t/texinfo-6.7-15.el9.x86_64.rpm
fi
continue
fi
if [ "$pkg" == libbabeltrace-devel ]; then
if ! dnf list installed libbabeltrace-devel >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/devel/x86_64/os/Packages/l/libbabeltrace-devel-1.5.8-10.el9.x86_64.rpm
fi
continue
fi
if [ "$pkg" == libipt-devel ]; then
if ! dnf list installed libipt-devel >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/devel/x86_64/os/Packages/l/libipt-devel-2.0.4-5.el9.x86_64.rpm
fi
continue
fi
if [ "$pkg" == doxygen ]; then
if ! dnf list installed doxygen >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/CRB/x86_64/os/Packages/d/doxygen-1.9.1-11.el9.x86_64.rpm
fi
continue
fi
if [ "$pkg" == cl-asdf ]; then
if ! dnf list installed cl-asdf >/dev/null 2>/dev/null; then
dnf install -y https://pkgs.sysadmins.ws/el8/base/x86_64/cl-asdf-20101028-18.el8.noarch.rpm
fi
continue
fi
if [ "$pkg" == common-lisp-controller ]; then
if ! dnf list installed common-lisp-controller >/dev/null 2>/dev/null; then
dnf install -y https://pkgs.sysadmins.ws/el8/base/x86_64/common-lisp-controller-7.4-20.el8.noarch.rpm
fi
continue
fi
if [ "$pkg" == sbcl ]; then
if ! dnf list installed sbcl >/dev/null 2>/dev/null; then
dnf install -y https://pkgs.sysadmins.ws/el8/base/x86_64/sbcl-2.0.1-4.el8.x86_64.rpm
fi
continue
fi
if [ "$pkg" == PyYAML ]; then
if [ -z ${SUDO_USER+x} ]; then # Running as root (e.g. Docker).
pip3 install --user PyYAML
else # Running using sudo.
sudo -H -u "$SUDO_USER" bash -c "pip3 install --user PyYAML"
fi
continue
fi
if [ "$pkg" == python3-virtualenv ]; then
if [ -z ${SUDO_USER+x} ]; then # Running as root (e.g. Docker).
pip3 install virtualenv
pip3 install virtualenvwrapper
else # Running using sudo.
sudo -H -u "$SUDO_USER" bash -c "pip3 install virtualenv"
sudo -H -u "$SUDO_USER" bash -c "pip3 install virtualenvwrapper"
fi
continue
fi
yum install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

View File

@ -5,17 +5,20 @@ IFS=' '
# NOTE: docker_image_name could be local image build based on release/package images.
# NOTE: each line has to be under quotes, docker_container_type, script_name and docker_image_name separate with a space.
# "docker_container_type script_name docker_image_name"
# docker_container_type OPTIONS:
# * mgrun -> running plain/empty operating system for the purposes of testing native memgraph package
# * mgbuild -> running the builder container to build memgraph inside it -> it's possible to create builder images using release/package/run.sh
OPERATING_SYSTEMS=(
"mgrun amzn-2 amazonlinux:2"
"mgrun centos-7 centos:7"
"mgrun centos-9 dokken/centos-stream-9"
"mgrun debian-10 debian:10"
"mgrun debian-11 debian:11"
"mgrun fedora-36 fedora:36"
"mgrun ubuntu-18.04 ubuntu:18.04"
"mgrun ubuntu-20.04 ubuntu:20.04"
"mgrun ubuntu-22.04 ubuntu:22.04"
# "mgbuild centos-7 package-mgbuild_centos-7"
# "mgrun amzn-2 amazonlinux:2"
# "mgrun centos-7 centos:7"
# "mgrun centos-9 dokken/centos-stream-9"
# "mgrun debian-10 debian:10"
# "mgrun debian-11 debian:11"
# "mgrun fedora-36 fedora:36"
# "mgrun ubuntu-18.04 ubuntu:18.04"
# "mgrun ubuntu-20.04 ubuntu:20.04"
# "mgrun ubuntu-22.04 ubuntu:22.04"
# "mgbuild debian-12 memgraph/memgraph-builder:v5_debian-12"
)
if [ ! "$(docker info)" ]; then
@ -33,14 +36,24 @@ print_help () {
# NOTE: This is an idempotent operation!
# TODO(gitbuda): Consider making docker_run always delete + start a new container or add a new function.
docker_run () {
cnt_name="$1"
cnt_image="$2"
cnt_type="$1"
if [[ "$cnt_type" != "mgbuild" && "$cnt_type" != "mgrun" ]]; then
echo "ERROR: Wrong docker_container_type -> valid options are mgbuild, mgrun"
exit 1
fi
cnt_name="$2"
cnt_image="$3"
if [ ! "$(docker ps -q -f name=$cnt_name)" ]; then
if [ "$(docker ps -aq -f status=exited -f name=$cnt_name)" ]; then
echo "Cleanup of the old exited container..."
docker rm $cnt_name
fi
docker run -d --volume "$SCRIPT_DIR/../../:/memgraph" --network host --name "$cnt_name" "$cnt_image" sleep infinity
if [[ "$cnt_type" == "mgbuild" ]]; then
docker run -d --volume "$SCRIPT_DIR/../../:/memgraph" --network host --name "$cnt_name" "$cnt_image"
fi
if [[ "$cnt_type" == "mgrun" ]]; then
docker run -d --volume "$SCRIPT_DIR/../../:/memgraph" --network host --name "$cnt_name" "$cnt_image" sleep infinity
fi
fi
echo "The $cnt_image container is active under $cnt_name name!"
}
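With the new type argument, a direct invocation now looks like this (images taken from the OPERATING_SYSTEMS list above):

docker_run mgrun debian-11 debian:11
docker_run mgbuild debian-12 memgraph/memgraph-builder:v5_debian-12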
@ -55,9 +68,9 @@ docker_stop_and_rm () {
cnt_name="$1"
if [ "$(docker ps -q -f name=$cnt_name)" ]; then
docker stop "$1"
if [ "$(docker ps -aq -f status=exited -f name=$cnt_name)" ]; then
docker rm "$1"
fi
fi
if [ "$(docker ps -aq -f status=exited -f name=$cnt_name)" ]; then
docker rm "$1"
fi
}
@ -71,7 +84,7 @@ start_all () {
docker_name="${docker_container_type}_$script_name"
echo ""
echo "~~~~ OPERATING ON $docker_image as $docker_name..."
docker_run "$docker_name" "$docker_image"
docker_run "$docker_container_type" "$docker_name" "$docker_image"
docker_exec "$docker_name" "/memgraph/environment/os/$script_name.sh install NEW_DEPS"
echo "---- DONE EVERYHING FOR $docker_image as $docker_name..."
echo ""

View File

@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -20,6 +18,10 @@ MEMGRAPH_BUILD_DEPS=(
pkg
)
MEMGRAPH_TEST_DEPS=(
pkg
)
MEMGRAPH_RUN_DEPS=(
pkg
)

View File

@ -1,10 +1,10 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# IMPORTANT: Deprecated since memgraph v2.12.0.
check_operating_system "ubuntu-18.04"
check_architecture "x86_64"

View File

@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -60,6 +58,8 @@ MEMGRAPH_BUILD_DEPS=(
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp2
)

View File

@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -60,6 +58,8 @@ MEMGRAPH_BUILD_DEPS=(
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp2
)

View File

@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -60,6 +58,8 @@ MEMGRAPH_BUILD_DEPS=(
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp2
)

View File

@ -2,3 +2,4 @@ archives
build
output
*.tar.gz
tmp_build.sh

View File

@ -0,0 +1,48 @@
#!/bin/bash -e
# NOTE: Copy this under memgraph/environment/toolchain/vN/tmp_build.sh, edit and test.
pushd () { command pushd "$@" > /dev/null; }
popd () { command popd "$@" > /dev/null; }
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
CPUS=$( grep -c processor < /proc/cpuinfo )
cd "$DIR"
source "$DIR/../../util.sh"
DISTRO="$(operating_system)"
TOOLCHAIN_VERSION=5
NAME=toolchain-v$TOOLCHAIN_VERSION
PREFIX=/opt/$NAME
function log_tool_name () {
echo ""
echo ""
echo "#### $1 ####"
echo ""
echo ""
}
# HERE: Remove/clear dependencies from a given toolchain.
mkdir -p archives && pushd archives
# HERE: Download dependencies here.
popd
mkdir -p build
pushd build
source $PREFIX/activate
export CC=$PREFIX/bin/clang
export CXX=$PREFIX/bin/clang++
export CFLAGS="$CFLAGS -fPIC"
export PATH=$PREFIX/bin:$PATH
export LD_LIBRARY_PATH=$PREFIX/lib64
COMMON_CMAKE_FLAGS="-DCMAKE_INSTALL_PREFIX=$PREFIX
-DCMAKE_PREFIX_PATH=$PREFIX
-DCMAKE_BUILD_TYPE=Release
-DCMAKE_C_COMPILER=$CC
-DCMAKE_CXX_COMPILER=$CXX
-DBUILD_SHARED_LIBS=OFF
-DCMAKE_CXX_STANDARD=20
-DBUILD_TESTING=OFF
-DCMAKE_REQUIRED_INCLUDES=$PREFIX/include
-DCMAKE_POSITION_INDEPENDENT_CODE=ON"
# HERE: Add dependencies to test below.
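As a sketch of how one dependency could be wired into the two HERE markers above (zlib is an arbitrary example; the URL is an assumption, with version 1.3.1 matching the toolchain script further below):

# Under "HERE: Download dependencies here." (inside archives/):
wget https://zlib.net/zlib-1.3.1.tar.gz -O zlib-1.3.1.tar.gz
# Under "HERE: Add dependencies to test below." (inside build/):
tar -xzf ../archives/zlib-1.3.1.tar.gz
pushd zlib-1.3.1
mkdir -p _build && pushd _build
cmake .. $COMMON_CMAKE_FLAGS
make -j$CPUS install
popd && popd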

View File

@ -307,7 +307,7 @@ if [ ! -f $PREFIX/bin/ld.gold ]; then
fi
log_tool_name "GDB $GDB_VERSION"
if [ ! -f $PREFIX/bin/gdb ]; then
if [[ ! -f "$PREFIX/bin/gdb" && "$DISTRO" -ne "amzn-2" ]]; then
if [ -d gdb-$GDB_VERSION ]; then
rm -rf gdb-$GDB_VERSION
fi
@ -671,7 +671,6 @@ PROXYGEN_SHA256=5360a8ccdfb2f5a6c7b3eed331ec7ab0e2c792d579c6fff499c85c516c11fe14
WANGLE_SHA256=1002e9c32b6f4837f6a760016e3b3e22f3509880ef3eaad191c80dc92655f23f
# WANGLE_SHA256=0e493c03572bb27fe9ca03a9da5023e52fde99c95abdcaa919bb6190e7e69532
FLEX_VERSION=2.6.4
FMT_SHA256=78b8c0a72b1c35e4443a7e308df52498252d1cefc2b08c9a97bc9ee6cfe61f8b
FMT_VERSION=10.1.1
# NOTE: spdlog depends on exact fmt versions -> UPGRADE fmt and spdlog TOGETHER.
@ -690,8 +689,8 @@ LZ4_VERSION=1.9.4
SNAPPY_SHA256=75c1fbb3d618dd3a0483bff0e26d0a92b495bbe5059c8b4f1c962b478b6e06e7
SNAPPY_VERSION=1.1.9
XZ_VERSION=5.2.5 # for LZMA
ZLIB_VERSION=1.3
ZSTD_VERSION=1.5.0
ZLIB_VERSION=1.3.1
ZSTD_VERSION=1.5.5
pushd archives
if [ ! -f boost_$BOOST_VERSION_UNDERSCORES.tar.gz ]; then
@ -700,7 +699,7 @@ if [ ! -f boost_$BOOST_VERSION_UNDERSCORES.tar.gz ]; then
wget https://boostorg.jfrog.io/artifactory/main/release/$BOOST_VERSION/source/boost_$BOOST_VERSION_UNDERSCORES.tar.gz -O boost_$BOOST_VERSION_UNDERSCORES.tar.gz
fi
if [ ! -f bzip2-$BZIP2_VERSION.tar.gz ]; then
wget https://sourceforge.net/projects/bzip2/files/bzip2-$BZIP2_VERSION.tar.gz -O bzip2-$BZIP2_VERSION.tar.gz
wget https://sourceware.org/pub/bzip2/bzip2-$BZIP2_VERSION.tar.gz -O bzip2-$BZIP2_VERSION.tar.gz
fi
if [ ! -f double-conversion-$DOUBLE_CONVERSION_VERSION.tar.gz ]; then
wget https://github.com/google/double-conversion/archive/refs/tags/v$DOUBLE_CONVERSION_VERSION.tar.gz -O double-conversion-$DOUBLE_CONVERSION_VERSION.tar.gz
@ -708,9 +707,7 @@ fi
if [ ! -f fizz-$FBLIBS_VERSION.tar.gz ]; then
wget https://github.com/facebookincubator/fizz/releases/download/v$FBLIBS_VERSION/fizz-v$FBLIBS_VERSION.tar.gz -O fizz-$FBLIBS_VERSION.tar.gz
fi
if [ ! -f flex-$FLEX_VERSION.tar.gz ]; then
wget https://github.com/westes/flex/releases/download/v$FLEX_VERSION/flex-$FLEX_VERSION.tar.gz -O flex-$FLEX_VERSION.tar.gz
fi
if [ ! -f fmt-$FMT_VERSION.tar.gz ]; then
wget https://github.com/fmtlib/fmt/archive/refs/tags/$FMT_VERSION.tar.gz -O fmt-$FMT_VERSION.tar.gz
fi
@ -765,14 +762,6 @@ echo "$BZIP2_SHA256 bzip2-$BZIP2_VERSION.tar.gz" | sha256sum -c
echo "$DOUBLE_CONVERSION_SHA256 double-conversion-$DOUBLE_CONVERSION_VERSION.tar.gz" | sha256sum -c
# verify fizz
echo "$FIZZ_SHA256 fizz-$FBLIBS_VERSION.tar.gz" | sha256sum -c
# verify flex
if [ ! -f flex-$FLEX_VERSION.tar.gz.sig ]; then
wget https://github.com/westes/flex/releases/download/v$FLEX_VERSION/flex-$FLEX_VERSION.tar.gz.sig
fi
if false; then
$GPG --keyserver $KEYSERVER --recv-keys 0xE4B29C8D64885307
$GPG --verify flex-$FLEX_VERSION.tar.gz.sig flex-$FLEX_VERSION.tar.gz
fi
# verify fmt
echo "$FMT_SHA256 fmt-$FMT_VERSION.tar.gz" | sha256sum -c
# verify spdlog
@ -1025,7 +1014,6 @@ if [ ! -d $PREFIX/include/gflags ]; then
if [ -d gflags ]; then
rm -rf gflags
fi
git clone https://github.com/memgraph/gflags.git gflags
pushd gflags
git checkout $GFLAGS_COMMIT_HASH
@ -1034,7 +1022,7 @@ if [ ! -d $PREFIX/include/gflags ]; then
cmake .. $COMMON_CMAKE_FLAGS \
-DREGISTER_INSTALL_PREFIX=OFF \
-DBUILD_gflags_nothreads_LIB=OFF \
-DGFLAGS_NO_FILENAMES=0
-DGFLAGS_NO_FILENAMES=1
make -j$CPUS install
popd && popd
fi
@ -1232,18 +1220,6 @@ if false; then
fi
fi
log_tool_name "flex $FLEX_VERSION"
if [ ! -f $PREFIX/include/FlexLexer.h ]; then
if [ -d flex-$FLEX_VERSION ]; then
rm -rf flex-$FLEX_VERSION
fi
tar -xzf ../archives/flex-$FLEX_VERSION.tar.gz
pushd flex-$FLEX_VERSION
./configure $COMMON_CONFIGURE_FLAGS
make -j$CPUS install
popd
fi
popd
# NOTE: It's important/clean (e.g., easier upload to S3) to have a separated
# folder to the output archive.

View File

@ -20,14 +20,18 @@ if [ ! -f "$INPUT" ]; then
fi
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} BEGIN and COMMIT are required because variables share the same name (e.g. row)"
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} CONSTRAINTS are just skipped -> ${COLOR_RED}please create consraints manually if needed${COLOR_NULL}"
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} CONSTRAINTS are just skipped -> ${COLOR_RED}please create constraints manually if needed${COLOR_NULL}"
echo 'CREATE INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' > "$OUTPUT"
sed -e 's/^:begin/BEGIN/g; s/^BEGIN$/BEGIN;/g;' \
-e 's/^:commit/COMMIT/g; s/^COMMIT$/COMMIT;/g;' \
-e '/^CALL/d; /^SCHEMA AWAIT/d;' \
-e 's/CREATE RANGE INDEX FOR (n:/CREATE INDEX ON :/g;' \
-e 's/) ON (n./(/g;' \
-e '/^CREATE CONSTRAINT/d; /^DROP CONSTRAINT/d;' "$INPUT" > "$OUTPUT"
-e '/^CREATE CONSTRAINT/d; /^DROP CONSTRAINT/d;' "$INPUT" >> "$OUTPUT"
echo 'DROP INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' >> "$OUTPUT"
echo ""
echo -e "${COLOR_GREEN}DONE!${COLOR_NULL} Please find Memgraph compatible cypherl|.cypher file under $OUTPUT"

View File

@ -0,0 +1,61 @@
#!/bin/bash -e
COLOR_ORANGE="\e[38;5;208m"
COLOR_GREEN="\e[38;5;35m"
COLOR_RED="\e[0;31m"
COLOR_NULL="\e[0m"
print_help() {
echo -e "${COLOR_ORANGE}HOW TO RUN:${COLOR_NULL} $0 input_file_schema_path input_file_nodes_path input_file_relationships_path input_file_cleanup_path output_file_path"
exit 1
}
if [ "$#" -ne 5 ]; then
print_help
fi
INPUT_SCHEMA="$1"
INPUT_NODES="$2"
INPUT_RELATIONSHIPS="$3"
INPUT_CLEANUP="$4"
OUTPUT="$5"
if [ ! -f "$INPUT_SCHEMA" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_NODES" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_RELATIONSHIPS" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_CLEANUP" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} BEGIN and COMMIT are required because variables share the same name (e.g. row)"
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} CONSTRAINTS are just skipped -> ${COLOR_RED}please create constraints manually if needed${COLOR_NULL}"
echo 'CREATE INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' > "$OUTPUT"
sed -e 's/CREATE RANGE INDEX FOR (n:/CREATE INDEX ON :/g;' \
-e 's/) ON (n./(/g;' \
-e '/^CREATE CONSTRAINT/d' "$INPUT_SCHEMA" >> "$OUTPUT"
cat "$INPUT_NODES" >> "$OUTPUT"
cat "$INPUT_RELATIONSHIPS" >> "$OUTPUT"
sed -e '/^DROP CONSTRAINT/d' "$INPUT_CLEANUP" >> "$OUTPUT"
echo 'DROP INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' >> "$OUTPUT"
echo ""
echo -e "${COLOR_GREEN}DONE!${COLOR_NULL} Please find Memgraph compatible cypherl|.cypher file under $OUTPUT"
echo ""
echo "Please import data by executing => \`cat $OUTPUT | mgconsole\`"

View File

@ -0,0 +1,64 @@
#!/bin/bash -e
COLOR_ORANGE="\e[38;5;208m"
COLOR_GREEN="\e[38;5;35m"
COLOR_RED="\e[0;31m"
COLOR_NULL="\e[0m"
print_help() {
echo -e "${COLOR_ORANGE}HOW TO RUN:${COLOR_NULL} $0 input_file_schema_path input_file_nodes_path input_file_relationships_path input_file_cleanup_path output_file_schema_path output_file_nodes_path output_file_relationships_path output_file_cleanup_path"
exit 1
}
if [ "$#" -ne 8 ]; then
print_help
fi
INPUT_SCHEMA="$1"
INPUT_NODES="$2"
INPUT_RELATIONSHIPS="$3"
INPUT_CLEANUP="$4"
OUTPUT_SCHEMA="$5"
OUTPUT_NODES="$6"
OUTPUT_RELATIONSHIPS="$7"
OUTPUT_CLEANUP="$8"
if [ ! -f "$INPUT_SCHEMA" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_NODES" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_RELATIONSHIPS" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_CLEANUP" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} BEGIN and COMMIT are required because variables share the same name (e.g. row)"
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} CONSTRAINTS are just skipped -> ${COLOR_RED}please create constraints manually if needed${COLOR_NULL}"
echo 'CREATE INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' > "$OUTPUT_SCHEMA"
sed -e 's/CREATE RANGE INDEX FOR (n:/CREATE INDEX ON :/g;' \
-e 's/) ON (n./(/g;' \
-e '/^CREATE CONSTRAINT/d' "$INPUT_SCHEMA" >> "$OUTPUT_SCHEMA"
cat "$INPUT_NODES" > "$OUTPUT_NODES"
cat "$INPUT_RELATIONSHIPS" > "$OUTPUT_RELATIONSHIPS"
sed -e '/^DROP CONSTRAINT/d' "$INPUT_CLEANUP" >> "$OUTPUT_CLEANUP"
echo 'DROP INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' >> "$OUTPUT_CLEANUP"
echo ""
echo -e "${COLOR_GREEN}DONE!${COLOR_NULL} Please find Memgraph compatible cypherl|.cypher files under $OUTPUT_SCHEMA, $OUTPUT_NODES, $OUTPUT_RELATIONSHIPS and $OUTPUT_CLEANUP"
echo ""
echo "Please import data by executing => \`cat $OUTPUT_SCHEMA | mgconsole\`, \`cat $OUTPUT_NODES | mgconsole\`, \`cat $OUTPUT_RELATIONSHIPS | mgconsole\` and \`cat $OUTPUT_CLEANUP | mgconsole\`"

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@ -283,7 +283,7 @@ inline mgp_list *list_all_unique_constraints(mgp_graph *graph, mgp_memory *memor
}
// mgp_graph
inline bool graph_is_transactional(mgp_graph *graph) { return MgInvoke<int>(mgp_graph_is_transactional, graph); }
inline bool graph_is_mutable(mgp_graph *graph) { return MgInvoke<int>(mgp_graph_is_mutable, graph); }
@ -326,6 +326,21 @@ inline mgp_vertex *graph_get_vertex_by_id(mgp_graph *g, mgp_vertex_id id, mgp_me
return MgInvoke<mgp_vertex *>(mgp_graph_get_vertex_by_id, g, id, memory);
}
inline bool graph_has_text_index(mgp_graph *graph, const char *index_name) {
return MgInvoke<int>(mgp_graph_has_text_index, graph, index_name);
}
inline mgp_map *graph_search_text_index(mgp_graph *graph, const char *index_name, const char *search_query,
text_search_mode search_mode, mgp_memory *memory) {
return MgInvoke<mgp_map *>(mgp_graph_search_text_index, graph, index_name, search_query, search_mode, memory);
}
inline mgp_map *graph_aggregate_over_text_index(mgp_graph *graph, const char *index_name, const char *search_query,
const char *aggregation_query, mgp_memory *memory) {
return MgInvoke<mgp_map *>(mgp_graph_aggregate_over_text_index, graph, index_name, search_query, aggregation_query,
memory);
}
inline mgp_vertices_iterator *graph_iter_vertices(mgp_graph *g, mgp_memory *memory) {
return MgInvoke<mgp_vertices_iterator *>(mgp_graph_iter_vertices, g, memory);
}

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@ -891,6 +891,36 @@ enum mgp_error mgp_edge_iter_properties(struct mgp_edge *e, struct mgp_memory *m
enum mgp_error mgp_graph_get_vertex_by_id(struct mgp_graph *g, struct mgp_vertex_id id, struct mgp_memory *memory,
struct mgp_vertex **result);
/// Result is non-zero if the index with the given name exists.
/// The current implementation always returns without errors.
enum mgp_error mgp_graph_has_text_index(struct mgp_graph *graph, const char *index_name, int *result);
/// Available modes of searching text indices.
MGP_ENUM_CLASS text_search_mode{
SPECIFIED_PROPERTIES,
REGEX,
ALL_PROPERTIES,
};
/// Search the named text index for the given query. The result is a map with the "search_results" and "error_msg" keys.
/// The "search_results" key contains the vertices whose text-indexed properties match the given query.
/// In case of a Tantivy error, the "search_results" key is absent, and "error_msg" contains the error message.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if there's an allocation error while constructing the results map.
/// Return mgp_error::MGP_ERROR_KEY_ALREADY_EXISTS if the same key is being created in the results map more than once.
enum mgp_error mgp_graph_search_text_index(struct mgp_graph *graph, const char *index_name, const char *search_query,
enum text_search_mode search_mode, struct mgp_memory *memory,
struct mgp_map **result);
/// Aggregate over the results of a search over the named text index. The result is a map with the "aggregation_results"
/// and "error_msg" keys.
/// The "aggregation_results" key contains the vertices whose text-indexed properties match the given query.
/// In case of a Tantivy error, the "aggregation_results" key is absent, and "error_msg" contains the error message.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if theres an allocation error while constructing the results map.
/// Return mgp_error::MGP_ERROR_KEY_ALREADY_EXISTS if the same key is being created in the results map more than once.
enum mgp_error mgp_graph_aggregate_over_text_index(struct mgp_graph *graph, const char *index_name,
const char *search_query, const char *aggregation_query,
struct mgp_memory *memory, struct mgp_map **result);
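// For orientation, a minimal sketch (not part of this diff) of driving the new C API from a
// procedure; the index name and query string are hypothetical, and `graph` and `memory` are
// assumed to be the arguments the procedure callback received:
//   struct mgp_map *results = NULL;
//   enum mgp_error err = mgp_graph_search_text_index(graph, "exampleIndex", "title:rules",
//                                                    text_search_mode::SPECIFIED_PROPERTIES,
//                                                    memory, &results);
//   if (err == mgp_error::MGP_ERROR_NO_ERROR) {
//     /* inspect the "search_results" key (or "error_msg" on a Tantivy error) of results */
//   }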
/// Creates label index for given label.
/// mgp_error::MGP_ERROR_NO_ERROR is always returned.
/// if label index already exists, result will be 0, otherwise 1.

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@ -32,6 +32,15 @@
namespace mgp {
class TextSearchException : public std::exception {
public:
explicit TextSearchException(std::string message) : message_(std::move(message)) {}
const char *what() const noexcept override { return message_.c_str(); }
private:
std::string message_;
};
class IndexException : public std::exception {
public:
explicit IndexException(std::string message) : message_(std::move(message)) {}
@ -4306,12 +4315,12 @@ inline void AddParamsReturnsToProc(mgp_proc *proc, std::vector<Parameter> &param
}
} // namespace detail
inline bool CreateLabelIndex(mgp_graph *memgaph_graph, const std::string_view label) {
return create_label_index(memgaph_graph, label.data());
inline bool CreateLabelIndex(mgp_graph *memgraph_graph, const std::string_view label) {
return create_label_index(memgraph_graph, label.data());
}
inline bool DropLabelIndex(mgp_graph *memgaph_graph, const std::string_view label) {
return drop_label_index(memgaph_graph, label.data());
inline bool DropLabelIndex(mgp_graph *memgraph_graph, const std::string_view label) {
return drop_label_index(memgraph_graph, label.data());
}
inline List ListAllLabelIndices(mgp_graph *memgraph_graph) {
@ -4322,14 +4331,14 @@ inline List ListAllLabelIndices(mgp_graph *memgraph_graph) {
return List(label_indices);
}
inline bool CreateLabelPropertyIndex(mgp_graph *memgaph_graph, const std::string_view label,
inline bool CreateLabelPropertyIndex(mgp_graph *memgraph_graph, const std::string_view label,
const std::string_view property) {
return create_label_property_index(memgaph_graph, label.data(), property.data());
return create_label_property_index(memgraph_graph, label.data(), property.data());
}
inline bool DropLabelPropertyIndex(mgp_graph *memgaph_graph, const std::string_view label,
inline bool DropLabelPropertyIndex(mgp_graph *memgraph_graph, const std::string_view label,
const std::string_view property) {
return drop_label_property_index(memgaph_graph, label.data(), property.data());
return drop_label_property_index(memgraph_graph, label.data(), property.data());
}
inline List ListAllLabelPropertyIndices(mgp_graph *memgraph_graph) {
@ -4340,6 +4349,58 @@ inline List ListAllLabelPropertyIndices(mgp_graph *memgraph_graph) {
return List(label_property_indices);
}
namespace {
constexpr std::string_view kErrorMsgKey = "error_msg";
constexpr std::string_view kSearchResultsKey = "search_results";
constexpr std::string_view kAggregationResultsKey = "aggregation_results";
} // namespace
inline List SearchTextIndex(mgp_graph *memgraph_graph, std::string_view index_name, std::string_view search_query,
text_search_mode search_mode) {
auto results_or_error = Map(mgp::MemHandlerCallback(graph_search_text_index, memgraph_graph, index_name.data(),
search_query.data(), search_mode));
if (results_or_error.KeyExists(kErrorMsgKey)) {
if (!results_or_error.At(kErrorMsgKey).IsString()) {
throw TextSearchException{"The error message is not a string!"};
}
throw TextSearchException(results_or_error.At(kErrorMsgKey).ValueString().data());
}
if (!results_or_error.KeyExists(kSearchResultsKey)) {
throw TextSearchException{"Incomplete text index search results!"};
}
if (!results_or_error.At(kSearchResultsKey).IsList()) {
throw TextSearchException{"Text index search results have wrong type!"};
}
return results_or_error.At(kSearchResultsKey).ValueList();
}
inline std::string_view AggregateOverTextIndex(mgp_graph *memgraph_graph, std::string_view index_name,
std::string_view search_query, std::string_view aggregation_query) {
auto results_or_error =
Map(mgp::MemHandlerCallback(graph_aggregate_over_text_index, memgraph_graph, index_name.data(),
search_query.data(), aggregation_query.data()));
if (results_or_error.KeyExists(kErrorMsgKey)) {
if (!results_or_error.At(kErrorMsgKey).IsString()) {
throw TextSearchException{"The error message is not a string!"};
}
throw TextSearchException(results_or_error.At(kErrorMsgKey).ValueString().data());
}
if (!results_or_error.KeyExists(kAggregationResultsKey)) {
throw TextSearchException{"Incomplete text index aggregation results!"};
}
if (!results_or_error.At(kAggregationResultsKey).IsString()) {
throw TextSearchException{"Text index aggregation results have wrong type!"};
}
return results_or_error.At(kAggregationResultsKey).ValueString();
}
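// A minimal usage sketch (assumed, not part of this diff) of the wrappers above from inside a
// query-module procedure; "exampleIndex" and the query string are hypothetical, and
// `memgraph_graph` and `record_factory` are assumed to be in scope:
//   try {
//     for (const auto &node : SearchTextIndex(memgraph_graph, "exampleIndex", "title:rules",
//                                             text_search_mode::SPECIFIED_PROPERTIES)) {
//       auto record = record_factory.NewRecord();
//       record.Insert("node", node.ValueNode());
//     }
//   } catch (const TextSearchException &e) {
//     record_factory.SetErrorMessage(e.what());
//   }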
inline bool CreateExistenceConstraint(mgp_graph *memgraph_graph, const std::string_view label,
const std::string_view property) {
return create_existence_constraint(memgraph_graph, label.data(), property.data());

init
View File

@ -14,6 +14,7 @@ function print_help () {
echo "Optional arguments:"
echo -e " -h\tdisplay this help and exit"
echo -e " --without-libs-setup\tskip the step for setting up libs"
echo -e " --ci\tscript is being run inside ci"
}
function setup_virtualenv () {
@ -35,6 +36,7 @@ function setup_virtualenv () {
}
setup_libs=true
ci=false
if [[ $# -eq 1 && "$1" == "-h" ]]; then
print_help
exit 0
@ -45,6 +47,10 @@ else
shift
setup_libs=false
;;
--ci)
shift
ci=true
;;
*)
# unknown option
echo "Invalid argument provided: $1"
@ -76,11 +82,13 @@ if [[ "$setup_libs" == "true" ]]; then
fi
# Fix for centos 7 during release
if [ "${DISTRO}" = "centos-7" ] || [ "${DISTRO}" = "debian-11" ] || [ "${DISTRO}" = "amzn-2" ]; then
if python3 -m pip show virtualenv >/dev/null 2>/dev/null; then
python3 -m pip uninstall -y virtualenv
if [[ "$ci" == "false" ]]; then
if [ "${DISTRO}" = "centos-7" ] || [ "${DISTRO}" = "debian-11" ] || [ "${DISTRO}" = "amzn-2" ]; then
if python3 -m pip show virtualenv >/dev/null 2>/dev/null; then
python3 -m pip uninstall -y virtualenv
fi
python3 -m pip install virtualenv
fi
python3 -m pip install virtualenv
fi
# setup gql_behave dependencies
@ -119,14 +127,16 @@ fi
# Install precommit hook except on old operating systems because we don't
# develop on them -> pre-commit hook not required -> we can use latest
# packages.
if [ "${DISTRO}" != "centos-7" ] && [ "$DISTRO" != "debian-10" ] && [ "${DISTRO}" != "ubuntu-18.04" ] && [ "${DISTRO}" != "amzn-2" ]; then
python3 -m pip install pre-commit
python3 -m pre_commit install
# Install py format tools for usage during the development.
echo "Install black formatter"
python3 -m pip install black==23.1.*
echo "Install isort"
python3 -m pip install isort==5.12.*
if [[ "$ci" == "false" ]]; then
if [ "${DISTRO}" != "centos-7" ] && [ "$DISTRO" != "debian-10" ] && [ "${DISTRO}" != "ubuntu-18.04" ] && [ "${DISTRO}" != "amzn-2" ]; then
python3 -m pip install pre-commit
python3 -m pre_commit install
# Install py format tools for usage during the development.
echo "Install black formatter"
python3 -m pip install black==23.1.*
echo "Install isort"
python3 -m pip install isort==5.12.*
fi
fi
# Link `include/mgp.py` with `release/mgp/mgp.py`

libs/.gitignore vendored
View File

@ -7,3 +7,4 @@
!pulsar.patch
!antlr4.10.1.patch
!rocksdb8.1.1.patch
!nuraft2.1.0.patch

View File

@ -16,7 +16,7 @@ set(GFLAGS_NOTHREADS OFF)
# NOTE: config/generate.py depends on the gflags help XML format.
find_package(gflags REQUIRED)
find_package(fmt 8.0.1)
find_package(fmt 8.0.1 REQUIRED)
find_package(ZLIB 1.2.11 REQUIRED)
set(LIB_DIR ${CMAKE_CURRENT_SOURCE_DIR})
@ -294,3 +294,38 @@ set_path_external_library(jemalloc STATIC
${CMAKE_CURRENT_SOURCE_DIR}/jemalloc/include/)
import_header_library(rangev3 ${CMAKE_CURRENT_SOURCE_DIR}/rangev3/include)
ExternalProject_Add(mgcxx-proj
PREFIX mgcxx-proj
GIT_REPOSITORY https://github.com/memgraph/mgcxx
GIT_TAG "v0.0.4"
CMAKE_ARGS
"-DCMAKE_INSTALL_PREFIX=<INSTALL_DIR>"
"-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}"
"-DENABLE_TESTS=OFF"
"-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}"
"-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}"
INSTALL_DIR "${PROJECT_BINARY_DIR}/mgcxx"
)
ExternalProject_Get_Property(mgcxx-proj install_dir)
set(MGCXX_ROOT ${install_dir})
add_library(tantivy_text_search STATIC IMPORTED GLOBAL)
add_dependencies(tantivy_text_search mgcxx-proj)
set_property(TARGET tantivy_text_search PROPERTY IMPORTED_LOCATION ${MGCXX_ROOT}/lib/libtantivy_text_search.a)
add_library(mgcxx_text_search STATIC IMPORTED GLOBAL)
add_dependencies(mgcxx_text_search mgcxx-proj)
set_property(TARGET mgcxx_text_search PROPERTY IMPORTED_LOCATION ${MGCXX_ROOT}/lib/libmgcxx_text_search.a)
# We need to create the include directory first in order to be able to add it
# as an include directory. The header files in the include directory will be
# generated later during the build process.
file(MAKE_DIRECTORY ${MGCXX_ROOT}/include)
set_property(TARGET mgcxx_text_search PROPERTY INTERFACE_INCLUDE_DIRECTORIES ${MGCXX_ROOT}/include)
# Setup NuRaft
import_external_library(nuraft STATIC
${CMAKE_CURRENT_SOURCE_DIR}/nuraft/lib/libnuraft.a
${CMAKE_CURRENT_SOURCE_DIR}/nuraft/include/)
find_package(OpenSSL REQUIRED)
target_link_libraries(nuraft INTERFACE ${OPENSSL_LIBRARIES})

View File

@ -5,7 +5,7 @@ index ee9b58c..31359a9 100644
@@ -48,7 +48,7 @@ option(LIBRDTSC_USE_PMU "Enables PMU usage on ARM platforms" OFF)
# | Library Build and Install Properties |
# +--------------------------------------------------------+
-add_library(rdtsc SHARED
+add_library(rdtsc
src/cycles.c
@ -14,7 +14,7 @@ index ee9b58c..31359a9 100644
@@ -72,15 +72,6 @@ target_include_directories(rdtsc
PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include
)
-# Install directory changes depending on build mode
-if (CMAKE_BUILD_TYPE MATCHES "^[Dd]ebug")
- # During debug, the library will be installed into a local directory
@ -27,3 +27,15 @@ index ee9b58c..31359a9 100644
# Specifying what to export when installing (GNUInstallDirs required)
install(TARGETS rdtsc
EXPORT librstsc-config
diff --git a/include/librdtsc/common_timer.h b/include/librdtsc/common_timer.h
index a6922d8..080dc77 100644
--- a/include/librdtsc/common_timer.h
+++ b/include/librdtsc/common_timer.h
@@ -2,6 +2,7 @@
#define LIBRDTSC_COMMON_TIMER_H
#include <librdtsc/common.h>
+#include <librdtsc/cycles.h>
extern uint64_t rdtsc_get_tsc_freq_arch();
extern uint64_t rdtsc_get_tsc_freq();

libs/nuraft2.1.0.patch Normal file
View File

@ -0,0 +1,24 @@
diff --git a/include/libnuraft/asio_service_options.hxx b/include/libnuraft/asio_service_options.hxx
index 8fe1ec9..9497355 100644
--- a/include/libnuraft/asio_service_options.hxx
+++ b/include/libnuraft/asio_service_options.hxx
@@ -17,6 +17,7 @@ limitations under the License.
#pragma once
+#include <cstdint>
#include <functional>
#include <string>
#include <system_error>
diff --git a/include/libnuraft/callback.hxx b/include/libnuraft/callback.hxx
index 7b71624..d48c1e2 100644
--- a/include/libnuraft/callback.hxx
+++ b/include/libnuraft/callback.hxx
@@ -18,6 +18,7 @@ limitations under the License.
#ifndef _CALLBACK_H_
#define _CALLBACK_H_
+#include <cstdint>
#include <functional>
#include <string>

View File

@ -1,21 +0,0 @@
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 6761929..6a369af 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -220,6 +220,7 @@ else()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -momit-leaf-frame-pointer")
endif()
endif()
+ set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-deprecated-copy -Wno-unused-but-set-variable")
endif()
include(CheckCCompilerFlag)
@@ -997,7 +998,7 @@ if(NOT WIN32 OR ROCKSDB_INSTALL_ON_WINDOWS)
if(ROCKSDB_BUILD_SHARED)
install(
- TARGETS ${ROCKSDB_SHARED_LIB}
+ TARGETS ${ROCKSDB_SHARED_LIB} OPTIONAL
EXPORT RocksDBTargets
COMPONENT runtime
ARCHIVE DESTINATION "${CMAKE_INSTALL_LIBDIR}"

View File

@ -123,9 +123,11 @@ declare -A primary_urls=(
["pulsar"]="http://$local_cache_host/git/pulsar.git"
["librdtsc"]="http://$local_cache_host/git/librdtsc.git"
["ctre"]="http://$local_cache_host/file/hanickadot/compile-time-regular-expressions/v3.7.2/single-header/ctre.hpp"
["absl"]="https://$local_cache_host/git/abseil-cpp.git"
["jemalloc"]="https://$local_cache_host/git/jemalloc.git"
["range-v3"]="https://$local_cache_host/git/ericniebler/range-v3.git"
["absl"]="http://$local_cache_host/git/abseil-cpp.git"
["jemalloc"]="http://$local_cache_host/git/jemalloc.git"
["range-v3"]="http://$local_cache_host/git/range-v3.git"
["nuraft"]="http://$local_cache_host/git/NuRaft.git"
["asio"]="http://$local_cache_host/git/asio.git"
)
# The goal of secondary urls is to have links to the "source of truth" of
@ -155,6 +157,8 @@ declare -A secondary_urls=(
["absl"]="https://github.com/abseil/abseil-cpp.git"
["jemalloc"]="https://github.com/jemalloc/jemalloc.git"
["range-v3"]="https://github.com/ericniebler/range-v3.git"
["nuraft"]="https://github.com/eBay/NuRaft.git"
["asio"]="https://github.com/chriskohlhoff/asio.git"
)
# antlr
@ -166,12 +170,11 @@ pushd antlr4
git apply ../antlr4.10.1.patch
popd
# cppitertools v2.0 2019-12-23
cppitertools_ref="cb3635456bdb531121b82b4d2e3afc7ae1f56d47"
cppitertools_ref="v2.1" # 2021-01-15
repo_clone_try_double "${primary_urls[cppitertools]}" "${secondary_urls[cppitertools]}" "cppitertools" "$cppitertools_ref"
# rapidcheck
rapidcheck_tag="7bc7d302191a4f3d0bf005692677126136e02f60" # (2020-05-04)
rapidcheck_tag="1c91f40e64d87869250cfb610376c629307bf77d" # (2023-08-15)
repo_clone_try_double "${primary_urls[rapidcheck]}" "${secondary_urls[rapidcheck]}" "rapidcheck" "$rapidcheck_tag"
# google benchmark
@ -179,7 +182,7 @@ benchmark_tag="v1.6.0"
repo_clone_try_double "${primary_urls[gbenchmark]}" "${secondary_urls[gbenchmark]}" "benchmark" "$benchmark_tag" true
# google test
googletest_tag="release-1.8.0"
googletest_tag="v1.14.0"
repo_clone_try_double "${primary_urls[gtest]}" "${secondary_urls[gtest]}" "googletest" "$googletest_tag" true
# libbcrypt
@ -219,7 +222,7 @@ repo_clone_try_double "${primary_urls[pymgclient]}" "${secondary_urls[pymgclient
mgconsole_tag="v1.4.0" # (2023-05-21)
repo_clone_try_double "${primary_urls[mgconsole]}" "${secondary_urls[mgconsole]}" "mgconsole" "$mgconsole_tag" true
spdlog_tag="v1.9.2" # (2021-08-12)
spdlog_tag="v1.12.0" # (2022-11-02)
repo_clone_try_double "${primary_urls[spdlog]}" "${secondary_urls[spdlog]}" "spdlog" "$spdlog_tag" true
# librdkafka
@ -265,11 +268,13 @@ repo_clone_try_double "${primary_urls[jemalloc]}" "${secondary_urls[jemalloc]}"
pushd jemalloc
./autogen.sh
MALLOC_CONF="retain:false,percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000" \
MALLOC_CONF="background_thread:true,retain:false,percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000" \
./configure \
--disable-cxx \
--with-lg-page=12 \
--with-lg-hugepage=21 \
--enable-shared=no --prefix=$working_dir \
--with-malloc-conf="retain:false,percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000"
--with-malloc-conf="background_thread:true,retain:false,percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000"
make -j$CPUS install
popd
@ -277,3 +282,13 @@ popd
#range-v3 release-0.12.0
range_v3_ref="release-0.12.0"
repo_clone_try_double "${primary_urls[range-v3]}" "${secondary_urls[range-v3]}" "rangev3" "$range_v3_ref"
# NuRaft
nuraft_tag="v2.1.0"
repo_clone_try_double "${primary_urls[nuraft]}" "${secondary_urls[nuraft]}" "nuraft" "$nuraft_tag" true
pushd nuraft
git apply ../nuraft2.1.0.patch
asio_tag="asio-1-29-0"
repo_clone_try_double "${primary_urls[asio]}" "${secondary_urls[asio]}" "asio" "$asio_tag" true
./prepare.sh
popd

View File

@ -6,6 +6,8 @@ project(memgraph_query_modules)
disallow_in_source_build()
find_package(fmt REQUIRED)
# Everything that is installed here, should be under the "query_modules" component.
set(CMAKE_INSTALL_DEFAULT_COMPONENT_NAME "query_modules")
string(TOLOWER ${CMAKE_BUILD_TYPE} lower_build_type)
@ -58,6 +60,22 @@ install(PROGRAMS $<TARGET_FILE:schema>
# Also install the source of the example, so user can read it.
install(FILES schema.cpp DESTINATION lib/memgraph/query_modules/src)
add_library(text SHARED text_search_module.cpp)
target_include_directories(text PRIVATE ${CMAKE_SOURCE_DIR}/include)
target_compile_options(text PRIVATE -Wall)
target_link_libraries(text PRIVATE -static-libgcc -static-libstdc++ fmt::fmt)
# Strip C++ example in release build.
if (lower_build_type STREQUAL "release")
add_custom_command(TARGET text POST_BUILD
COMMAND strip -s $<TARGET_FILE:text>
COMMENT "Stripping symbols and sections from the C++ text_search module")
endif()
install(PROGRAMS $<TARGET_FILE:text>
DESTINATION lib/memgraph/query_modules
RENAME text.so)
# Also install the source of the example, so user can read it.
install(FILES text_search_module.cpp DESTINATION lib/memgraph/query_modules/src)
# Install the Python example and modules
install(FILES example.py DESTINATION lib/memgraph/query_modules RENAME py_example.py)
install(FILES graph_analyzer.py DESTINATION lib/memgraph/query_modules)

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@ -9,10 +9,11 @@
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#include <boost/functional/hash.hpp>
#include <mgp.hpp>
#include "utils/string.hpp"
#include <optional>
#include <unordered_set>
namespace Schema {
@ -37,6 +38,7 @@ constexpr std::string_view kParameterIndices = "indices";
constexpr std::string_view kParameterUniqueConstraints = "unique_constraints";
constexpr std::string_view kParameterExistenceConstraints = "existence_constraints";
constexpr std::string_view kParameterDropExisting = "drop_existing";
constexpr int kInitialNumberOfPropertyOccurances = 1;
std::string TypeOf(const mgp::Type &type);
@ -108,83 +110,79 @@ void Schema::ProcessPropertiesRel(mgp::Record &record, const std::string_view &t
record.Insert(std::string(kReturnMandatory).c_str(), mandatory);
}
struct Property {
std::string name;
mgp::Value value;
struct PropertyInfo {
std::unordered_set<std::string> property_types; // property types
int64_t number_of_property_occurrences = 0;
Property(const std::string &name, mgp::Value &&value) : name(name), value(std::move(value)) {}
PropertyInfo() = default;
explicit PropertyInfo(std::string &&property_type)
: property_types({std::move(property_type)}),
number_of_property_occurrences(Schema::kInitialNumberOfPropertyOccurances) {}
};
struct LabelsInfo {
std::unordered_map<std::string, PropertyInfo> properties; // key is a property name
int64_t number_of_label_occurrences = 0;
};
struct LabelsHash {
std::size_t operator()(const std::set<std::string> &set) const {
std::size_t seed = set.size();
for (const auto &i : set) {
seed ^= std::hash<std::string>{}(i) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}
return seed;
}
std::size_t operator()(const std::set<std::string> &s) const { return boost::hash_range(s.begin(), s.end()); }
};
struct LabelsComparator {
bool operator()(const std::set<std::string> &lhs, const std::set<std::string> &rhs) const { return lhs == rhs; }
};
struct PropertyComparator {
bool operator()(const Property &lhs, const Property &rhs) const { return lhs.name < rhs.name; }
};
struct PropertyInfo {
std::set<Property, PropertyComparator> properties;
bool mandatory;
};
void Schema::NodeTypeProperties(mgp_list * /*args*/, mgp_graph *memgraph_graph, mgp_result *result,
mgp_memory *memory) {
mgp::MemoryDispatcherGuard guard{memory};
const auto record_factory = mgp::RecordFactory(result);
try {
std::unordered_map<std::set<std::string>, PropertyInfo, LabelsHash, LabelsComparator> node_types_properties;
std::unordered_map<std::set<std::string>, LabelsInfo, LabelsHash, LabelsComparator> node_types_properties;
for (auto node : mgp::Graph(memgraph_graph).Nodes()) {
for (const auto node : mgp::Graph(memgraph_graph).Nodes()) {
std::set<std::string> labels_set = {};
for (auto label : node.Labels()) {
for (const auto label : node.Labels()) {
labels_set.emplace(label);
}
if (node_types_properties.find(labels_set) == node_types_properties.end()) {
node_types_properties[labels_set] = PropertyInfo{std::set<Property, PropertyComparator>(), true};
}
node_types_properties[labels_set].number_of_label_occurrences++;
if (node.Properties().empty()) {
node_types_properties[labels_set].mandatory = false; // if there is node with no property, it is not mandatory
continue;
}
auto &property_info = node_types_properties.at(labels_set);
for (auto &[key, prop] : node.Properties()) {
property_info.properties.emplace(key, std::move(prop));
if (property_info.mandatory) {
property_info.mandatory =
property_info.properties.size() == 1; // if there is only one property, it is mandatory
auto &labels_info = node_types_properties.at(labels_set);
for (const auto &[key, prop] : node.Properties()) {
auto prop_type = TypeOf(prop.Type());
if (labels_info.properties.find(key) == labels_info.properties.end()) {
labels_info.properties[key] = PropertyInfo{std::move(prop_type)};
} else {
labels_info.properties[key].property_types.emplace(prop_type);
labels_info.properties[key].number_of_property_occurrences++;
}
}
}
for (auto &[labels, property_info] : node_types_properties) {
for (auto &[node_type, labels_info] : node_types_properties) { // node type is a set of labels
std::string label_type;
mgp::List labels_list = mgp::List();
for (auto const &label : labels) {
auto labels_list = mgp::List();
for (const auto &label : node_type) {
label_type += ":`" + std::string(label) + "`";
labels_list.AppendExtend(mgp::Value(label));
}
for (auto const &prop : property_info.properties) {
for (const auto &prop : labels_info.properties) {
auto prop_types = mgp::List();
for (const auto &prop_type : prop.second.property_types) {
prop_types.AppendExtend(mgp::Value(prop_type));
}
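// A property is mandatory only if it occurs on every node that has this exact label set.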
bool mandatory = prop.second.number_of_property_occurrences == labels_info.number_of_label_occurrences;
auto record = record_factory.NewRecord();
ProcessPropertiesNode(record, label_type, labels_list, prop.name, TypeOf(prop.value.Type()),
property_info.mandatory);
ProcessPropertiesNode(record, label_type, labels_list, prop.first, prop_types, mandatory);
}
if (property_info.properties.empty()) {
if (labels_info.properties.empty()) {
auto record = record_factory.NewRecord();
ProcessPropertiesNode<std::string>(record, label_type, labels_list, "", "", false);
ProcessPropertiesNode<mgp::List>(record, label_type, labels_list, "", mgp::List(), false);
}
}
@ -197,40 +195,45 @@ void Schema::NodeTypeProperties(mgp_list * /*args*/, mgp_graph *memgraph_graph,
void Schema::RelTypeProperties(mgp_list * /*args*/, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory) {
mgp::MemoryDispatcherGuard guard{memory};
std::unordered_map<std::string, PropertyInfo> rel_types_properties;
std::unordered_map<std::string, LabelsInfo> rel_types_properties;
const auto record_factory = mgp::RecordFactory(result);
try {
const mgp::Graph graph = mgp::Graph(memgraph_graph);
for (auto rel : graph.Relationships()) {
const auto graph = mgp::Graph(memgraph_graph);
for (const auto rel : graph.Relationships()) {
std::string rel_type = std::string(rel.Type());
if (rel_types_properties.find(rel_type) == rel_types_properties.end()) {
rel_types_properties[rel_type] = PropertyInfo{std::set<Property, PropertyComparator>(), true};
}
rel_types_properties[rel_type].number_of_label_occurrences++;
if (rel.Properties().empty()) {
rel_types_properties[rel_type].mandatory = false; // if there is rel with no property, it is not mandatory
continue;
}
auto &property_info = rel_types_properties.at(rel_type);
auto &labels_info = rel_types_properties.at(rel_type);
for (auto &[key, prop] : rel.Properties()) {
property_info.properties.emplace(key, std::move(prop));
if (property_info.mandatory) {
property_info.mandatory =
property_info.properties.size() == 1; // if there is only one property, it is mandatory
auto prop_type = TypeOf(prop.Type());
if (labels_info.properties.find(key) == labels_info.properties.end()) {
labels_info.properties[key] = PropertyInfo{std::move(prop_type)};
} else {
labels_info.properties[key].property_types.emplace(prop_type);
labels_info.properties[key].number_of_property_occurrences++;
}
}
}
for (auto &[type, property_info] : rel_types_properties) {
std::string type_str = ":`" + std::string(type) + "`";
for (auto const &prop : property_info.properties) {
for (auto &[rel_type, labels_info] : rel_types_properties) {
std::string type_str = ":`" + std::string(rel_type) + "`";
for (const auto &prop : labels_info.properties) {
auto prop_types = mgp::List();
for (const auto &prop_type : prop.second.property_types) {
prop_types.AppendExtend(mgp::Value(prop_type));
}
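// A property is mandatory only if it occurs on every relationship of this type.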
bool mandatory = prop.second.number_of_property_occurrences == labels_info.number_of_label_occurrences;
auto record = record_factory.NewRecord();
ProcessPropertiesRel(record, type_str, prop.name, TypeOf(prop.value.Type()), property_info.mandatory);
ProcessPropertiesRel(record, type_str, prop.first, prop_types, mandatory);
}
if (property_info.properties.empty()) {
if (labels_info.properties.empty()) {
auto record = record_factory.NewRecord();
ProcessPropertiesRel<std::string>(record, type_str, "", "", false);
ProcessPropertiesRel<mgp::List>(record, type_str, "", mgp::List(), false);
}
}

View File

@ -0,0 +1,149 @@
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#include <string>
#include <string_view>
#include <fmt/format.h>
#include <mgp.hpp>
namespace TextSearch {
constexpr std::string_view kProcedureSearch = "search";
constexpr std::string_view kProcedureRegexSearch = "regex_search";
constexpr std::string_view kProcedureSearchAllProperties = "search_all";
constexpr std::string_view kProcedureAggregate = "aggregate";
constexpr std::string_view kParameterIndexName = "index_name";
constexpr std::string_view kParameterSearchQuery = "search_query";
constexpr std::string_view kParameterAggregationQuery = "aggregation_query";
constexpr std::string_view kReturnNode = "node";
constexpr std::string_view kReturnAggregation = "aggregation";
const std::string kSearchAllPrefix = "all";
void Search(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory);
void RegexSearch(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory);
void SearchAllProperties(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory);
void Aggregate(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory);
} // namespace TextSearch
void TextSearch::Search(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory) {
mgp::MemoryDispatcherGuard guard{memory};
const auto record_factory = mgp::RecordFactory(result);
auto arguments = mgp::List(args);
try {
const auto *index_name = arguments[0].ValueString().data();
const auto *search_query = arguments[1].ValueString().data();
for (const auto &node :
mgp::SearchTextIndex(memgraph_graph, index_name, search_query, text_search_mode::SPECIFIED_PROPERTIES)) {
auto record = record_factory.NewRecord();
record.Insert(TextSearch::kReturnNode.data(), node.ValueNode());
}
} catch (const std::exception &e) {
record_factory.SetErrorMessage(e.what());
}
}
void TextSearch::RegexSearch(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory) {
mgp::MemoryDispatcherGuard guard{memory};
const auto record_factory = mgp::RecordFactory(result);
auto arguments = mgp::List(args);
try {
const auto *index_name = arguments[0].ValueString().data();
const auto *search_query = arguments[1].ValueString().data();
for (const auto &node : mgp::SearchTextIndex(memgraph_graph, index_name, search_query, text_search_mode::REGEX)) {
auto record = record_factory.NewRecord();
record.Insert(TextSearch::kReturnNode.data(), node.ValueNode());
}
} catch (const std::exception &e) {
record_factory.SetErrorMessage(e.what());
}
}
void TextSearch::SearchAllProperties(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result,
mgp_memory *memory) {
mgp::MemoryDispatcherGuard guard{memory};
const auto record_factory = mgp::RecordFactory(result);
auto arguments = mgp::List(args);
try {
const auto *index_name = arguments[0].ValueString().data();
// Keep the formatted query in a named std::string: taking .data() of the temporary returned
// by fmt::format would leave a dangling pointer once the temporary is destroyed.
const auto search_query = fmt::format("{}:{}", kSearchAllPrefix, arguments[1].ValueString());
for (const auto &node :
mgp::SearchTextIndex(memgraph_graph, index_name, search_query, text_search_mode::ALL_PROPERTIES)) {
auto record = record_factory.NewRecord();
record.Insert(TextSearch::kReturnNode.data(), node.ValueNode());
}
} catch (const std::exception &e) {
record_factory.SetErrorMessage(e.what());
}
}
void TextSearch::Aggregate(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory) {
mgp::MemoryDispatcherGuard guard{memory};
const auto record_factory = mgp::RecordFactory(result);
auto arguments = mgp::List(args);
try {
const auto *index_name = arguments[0].ValueString().data();
const auto *search_query = arguments[1].ValueString().data();
const auto *aggregation_query = arguments[2].ValueString().data();
const auto aggregation_result =
mgp::AggregateOverTextIndex(memgraph_graph, index_name, search_query, aggregation_query);
auto record = record_factory.NewRecord();
record.Insert(TextSearch::kReturnAggregation.data(), aggregation_result.data());
} catch (const std::exception &e) {
record_factory.SetErrorMessage(e.what());
}
}
extern "C" int mgp_init_module(struct mgp_module *module, struct mgp_memory *memory) {
try {
mgp::MemoryDispatcherGuard guard{memory};
AddProcedure(TextSearch::Search, TextSearch::kProcedureSearch, mgp::ProcedureType::Read,
{
mgp::Parameter(TextSearch::kParameterIndexName, mgp::Type::String),
mgp::Parameter(TextSearch::kParameterSearchQuery, mgp::Type::String),
},
{mgp::Return(TextSearch::kReturnNode, mgp::Type::Node)}, module, memory);
AddProcedure(TextSearch::RegexSearch, TextSearch::kProcedureRegexSearch, mgp::ProcedureType::Read,
{
mgp::Parameter(TextSearch::kParameterIndexName, mgp::Type::String),
mgp::Parameter(TextSearch::kParameterSearchQuery, mgp::Type::String),
},
{mgp::Return(TextSearch::kReturnNode, mgp::Type::Node)}, module, memory);
AddProcedure(TextSearch::SearchAllProperties, TextSearch::kProcedureSearchAllProperties, mgp::ProcedureType::Read,
{
mgp::Parameter(TextSearch::kParameterIndexName, mgp::Type::String),
mgp::Parameter(TextSearch::kParameterSearchQuery, mgp::Type::String),
},
{mgp::Return(TextSearch::kReturnNode, mgp::Type::Node)}, module, memory);
AddProcedure(TextSearch::Aggregate, TextSearch::kProcedureAggregate, mgp::ProcedureType::Read,
{
mgp::Parameter(TextSearch::kParameterIndexName, mgp::Type::String),
mgp::Parameter(TextSearch::kParameterSearchQuery, mgp::Type::String),
mgp::Parameter(TextSearch::kParameterAggregationQuery, mgp::Type::String),
},
{mgp::Return(TextSearch::kReturnAggregation, mgp::Type::String)}, module, memory);
} catch (const std::exception &e) {
std::cerr << "Error while initializing query module: " << e.what() << std::endl;
return 1;
}
return 0;
}
extern "C" int mgp_shutdown_module() { return 0; }

View File

@ -0,0 +1,73 @@
version: "3"
services:
mgbuild_v4_amzn-2:
image: "memgraph/mgbuild:v4_amzn-2"
build:
context: amzn-2
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_amzn-2"
mgbuild_v4_centos-7:
image: "memgraph/mgbuild:v4_centos-7"
build:
context: centos-7
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_centos-7"
mgbuild_v4_centos-9:
image: "memgraph/mgbuild:v4_centos-9"
build:
context: centos-9
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_centos-9"
mgbuild_v4_debian-10:
image: "memgraph/mgbuild:v4_debian-10"
build:
context: debian-10
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_debian-10"
mgbuild_v4_debian-11:
image: "memgraph/mgbuild:v4_debian-11"
build:
context: debian-11
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_debian-11"
mgbuild_v4_fedora-36:
image: "memgraph/mgbuild:v4_fedora-36"
build:
context: fedora-36
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_fedora-36"
mgbuild_v4_ubuntu-18.04:
image: "memgraph/mgbuild:v4_ubuntu-18.04"
build:
context: ubuntu-18.04
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_ubuntu-18.04"
mgbuild_v4_ubuntu-20.04:
image: "memgraph/mgbuild:v4_ubuntu-20.04"
build:
context: ubuntu-20.04
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_ubuntu-20.04"
mgbuild_v4_ubuntu-22.04:
image: "memgraph/mgbuild:v4_ubuntu-22.04"
build:
context: ubuntu-22.04
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_ubuntu-22.04"

View File

@ -0,0 +1,81 @@
version: "3"
services:
mgbuild_v5_amzn-2:
image: "memgraph/mgbuild:v5_amzn-2"
build:
context: amzn-2
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_amzn-2"
mgbuild_v5_centos-7:
image: "memgraph/mgbuild:v5_centos-7"
build:
context: centos-7
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_centos-7"
mgbuild_v5_centos-9:
image: "memgraph/mgbuild:v5_centos-9"
build:
context: centos-9
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_centos-9"
mgbuild_v5_debian-11:
image: "memgraph/mgbuild:v5_debian-11"
build:
context: debian-11
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_debian-11"
mgbuild_v5_debian-12:
image: "memgraph/mgbuild:v5_debian-12"
build:
context: debian-12
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_debian-12"
mgbuild_v5_fedora-38:
image: "memgraph/mgbuild:v5_fedora-38"
build:
context: fedora-38
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_fedora-38"
mgbuild_v5_fedora-39:
image: "memgraph/mgbuild:v5_fedora-39"
build:
context: fedora-39
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_fedora-39"
mgbuild_v5_rocky-9.3:
image: "memgraph/mgbuild:v5_rocky-9.3"
build:
context: rocky-9.3
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_rocky-9.3"
mgbuild_v5_ubuntu-20.04:
image: "memgraph/mgbuild:v5_ubuntu-20.04"
build:
context: ubuntu-20.04
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_ubuntu-20.04"
mgbuild_v5_ubuntu-22.04:
image: "memgraph/mgbuild:v5_ubuntu-22.04"
build:
context: ubuntu-22.04
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_ubuntu-22.04"

View File

@ -7,9 +7,34 @@ RUN yum -y update \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz
# Download and install toolchain
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/amzn-2.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/amzn-2.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
# Install PyYAML (only for amzn-2, centos-7, centos-9 and rocky-9)
RUN pip3 install --user PyYAML
ENTRYPOINT ["sleep", "infinity"]

View File

@ -0,0 +1,18 @@
version: "3"
services:
mgbuild_v4_debian-11-arm:
image: "memgraph/mgbuild:v4_debian-11-arm"
build:
context: debian-11-arm
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_debian-11-arm"
mgbuild_v4_ubuntu-22.04-arm:
image: "memgraph/mgbuild:v4_ubuntu-22.04-arm"
build:
context: ubuntu-22.04-arm
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_ubuntu-22.04-arm"

View File

@ -0,0 +1,18 @@
version: "3"
services:
debian-12-arm:
image: "memgraph/mgbuild:v5_debian-12-arm"
build:
context: debian-12-arm
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_debian-12-arm"
ubuntu-22.04-arm:
image: "memgraph/mgbuild:v5_ubuntu-22.04-arm"
build:
context: ubuntu-22.04-arm
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_ubuntu-22.04-arm"

View File

@ -1,11 +0,0 @@
version: "3"
services:
debian-11-arm:
build:
context: debian-11-arm
container_name: "mgbuild_debian-11-arm"
ubuntu-2204-arm:
build:
context: ubuntu-22.04-arm
container_name: "mgbuild_ubuntu-22.04-arm"

View File

@ -7,9 +7,33 @@ RUN yum -y update \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/centos-7.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/centos-7.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
# Install PyYAML (only for amzn-2, centos-7, centos-9 and rocky-9)
RUN pip3 install --user PyYAML
ENTRYPOINT ["sleep", "infinity"]

View File

@ -7,9 +7,33 @@ RUN yum -y update \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/centos-9.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/centos-9.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
# Install PyYAML (only for amzn-2, centos-7, centos-9 and rocky-9)
RUN pip3 install --user PyYAML
ENTRYPOINT ["sleep", "infinity"]

View File

@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/debian-10.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/debian-10.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

View File

@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/debian-11-arm.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/debian-11-arm.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

View File

@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/debian-11.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/debian-11.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

View File

@ -0,0 +1,39 @@
FROM debian:12
ARG TOOLCHAIN_VERSION
# Stops tzdata interactive configuration.
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && apt install -y \
ca-certificates wget git
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-arm64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-arm64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-arm64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-arm64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/debian-12-arm.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/debian-12-arm.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

View File

@ -0,0 +1,39 @@
FROM debian:12
ARG TOOLCHAIN_VERSION
# Stops tzdata interactive configuration.
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && apt install -y \
ca-certificates wget git
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/debian-12.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/debian-12.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

View File

@ -1,38 +0,0 @@
version: "3"
services:
mgbuild_centos-7:
build:
context: centos-7
container_name: "mgbuild_centos-7"
mgbuild_centos-9:
build:
context: centos-9
container_name: "mgbuild_centos-9"
mgbuild_debian-10:
build:
context: debian-10
container_name: "mgbuild_debian-10"
mgbuild_debian-11:
build:
context: debian-11
container_name: "mgbuild_debian-11"
mgbuild_ubuntu-18.04:
build:
context: ubuntu-18.04
container_name: "mgbuild_ubuntu-18.04"
mgbuild_ubuntu-20.04:
build:
context: ubuntu-20.04
container_name: "mgbuild_ubuntu-20.04"
mgbuild_ubuntu-22.04:
build:
context: ubuntu-22.04
container_name: "mgbuild_ubuntu-22.04"
mgbuild_fedora-36:
build:
context: fedora-36
container_name: "mgbuild_fedora-36"
mgbuild_amzn-2:
build:
context: amzn-2
container_name: "mgbuild_amzn-2"

View File

@ -8,9 +8,30 @@ RUN yum -y update \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/fedora-36.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/fedora-36.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

View File

@ -0,0 +1,37 @@
FROM fedora:38
ARG TOOLCHAIN_VERSION
# Stops tzdata interactive configuration.
RUN yum -y update \
&& yum install -y wget git
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-38-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-38-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-38-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-38-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/fedora-38.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/fedora-38.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

View File

@ -0,0 +1,37 @@
FROM fedora:39
ARG TOOLCHAIN_VERSION
# Stops tzdata interactive configuration.
RUN yum -y update \
&& yum install -y wget git
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-39-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-39-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-39-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-39-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/fedora-39.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/fedora-39.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

release/package/mgbuild.sh Executable file
View File

@ -0,0 +1,669 @@
#!/bin/bash
set -Eeuo pipefail
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
SCRIPT_NAME=${0##*/}
PROJECT_ROOT="$SCRIPT_DIR/../.."
MGBUILD_HOME_DIR="/home/mg"
MGBUILD_ROOT_DIR="$MGBUILD_HOME_DIR/memgraph"
DEFAULT_TOOLCHAIN="v5"
SUPPORTED_TOOLCHAINS=(
v4 v5
)
DEFAULT_OS="all"
SUPPORTED_OS=(
all
amzn-2
centos-7 centos-9
debian-10 debian-11 debian-11-arm debian-12 debian-12-arm
fedora-36 fedora-38 fedora-39
rocky-9.3
ubuntu-18.04 ubuntu-20.04 ubuntu-22.04 ubuntu-22.04-arm
)
SUPPORTED_OS_V4=(
amzn-2
centos-7 centos-9
debian-10 debian-11 debian-11-arm
fedora-36
ubuntu-18.04 ubuntu-20.04 ubuntu-22.04 ubuntu-22.04-arm
)
SUPPORTED_OS_V5=(
amzn-2
centos-7 centos-9
debian-11 debian-11-arm debian-12 debian-12-arm
fedora-38 fedora-39
rocky-9.3
ubuntu-20.04 ubuntu-22.04 ubuntu-22.04-arm
)
DEFAULT_BUILD_TYPE="Release"
SUPPORTED_BUILD_TYPES=(
Debug
Release
RelWithDebInfo
)
DEFAULT_ARCH="amd"
SUPPORTED_ARCHS=(
amd
arm
)
SUPPORTED_TESTS=(
clang-tidy cppcheck-and-clang-format code-analysis
code-coverage drivers drivers-high-availability durability e2e gql-behave
integration leftover-CTest macro-benchmark
mgbench stress-plain stress-ssl
unit unit-coverage upload-to-bench-graph
)
DEFAULT_THREADS=0
DEFAULT_ENTERPRISE_LICENSE=""
DEFAULT_ORGANIZATION_NAME="memgraph"
print_help () {
echo -e "\nUsage: $SCRIPT_NAME [GLOBAL OPTIONS] COMMAND [COMMAND OPTIONS]"
echo -e "\nInteract with mgbuild containers"
echo -e "\nCommands:"
echo -e " build Build mgbuild image"
echo -e " build-memgraph [OPTIONS] Build memgraph binary inside mgbuild container"
echo -e " copy OPTIONS Copy an artifact from mgbuild container to host"
echo -e " package-memgraph Create memgraph package from built binary inside mgbuild container"
echo -e " pull Pull mgbuild image from dockerhub"
echo -e " push [OPTIONS] Push mgbuild image to dockerhub"
echo -e " run [OPTIONS] Run mgbuild container"
echo -e " stop [OPTIONS] Stop mgbuild container"
echo -e " test-memgraph TEST Run a selected test TEST (see supported tests below) inside mgbuild container"
echo -e "\nSupported tests:"
echo -e " \"${SUPPORTED_TESTS[*]}\""
echo -e "\nGlobal options:"
echo -e " --arch string Specify target architecture (\"${SUPPORTED_ARCHS[*]}\") (default \"$DEFAULT_ARCH\")"
echo -e " --build-type string Specify build type (\"${SUPPORTED_BUILD_TYPES[*]}\") (default \"$DEFAULT_BUILD_TYPE\")"
echo -e " --enterprise-license string Specify the enterprise license (default \"\")"
echo -e " --organization-name string Specify the organization name (default \"memgraph\")"
echo -e " --os string Specify operating system (\"${SUPPORTED_OS[*]}\") (default \"$DEFAULT_OS\")"
echo -e " --threads int Specify the number of threads a command will use (default \"\$(nproc)\" for container)"
echo -e " --toolchain string Specify toolchain version (\"${SUPPORTED_TOOLCHAINS[*]}\") (default \"$DEFAULT_TOOLCHAIN\")"
echo -e "\nbuild-memgraph options:"
echo -e " --asan Build with ASAN"
echo -e " --community Build community version"
echo -e " --coverage Build with code coverage"
echo -e " --for-docker Add flag -DMG_TELEMETRY_ID_OVERRIDE=DOCKER to cmake"
echo -e " --for-platform Add flag -DMG_TELEMETRY_ID_OVERRIDE=DOCKER-PLATFORM to cmake"
echo -e " --init-only Only run init script"
echo -e " --no-copy Don't copy the memgraph repo from host."
echo -e " Use this option with caution, be sure that memgraph source code is in correct location inside mgbuild container"
echo -e " --ubsan Build with UBSAN"
echo -e "\ncopy options:"
echo -e " --binary Copy memgraph binary from mgbuild container to host"
echo -e " --build-logs Copy build logs from mgbuild container to host"
echo -e " --package Copy memgraph package from mgbuild container to host"
echo -e "\npush options:"
echo -e " -p, --password string Specify password for docker login"
echo -e " -u, --username string Specify username for docker login"
echo -e "\nrun options:"
echo -e " --pull Pull the mgbuild image before running"
echo -e "\nstop options:"
echo -e " --remove Remove the stopped mgbuild container"
echo -e "\nToolchain v4 supported OSs:"
echo -e " \"${SUPPORTED_OS_V4[*]}\""
echo -e "\nToolchain v5 supported OSs:"
echo -e " \"${SUPPORTED_OS_V5[*]}\""
echo -e "\nExample usage:"
echo -e " $SCRIPT_NAME --os debian-12 --toolchain v5 --arch amd run"
echo -e " $SCRIPT_NAME --os debian-12 --toolchain v5 --arch amd --build-type RelWithDebInfo build-memgraph --community"
echo -e " $SCRIPT_NAME --os debian-12 --toolchain v5 --arch amd --build-type RelWithDebInfo test-memgraph unit"
echo -e " $SCRIPT_NAME --os debian-12 --toolchain v5 --arch amd package"
echo -e " $SCRIPT_NAME --os debian-12 --toolchain v5 --arch amd copy --package"
echo -e " $SCRIPT_NAME --os debian-12 --toolchain v5 --arch amd stop --remove"
}
check_support() {
local is_supported=false
case "$1" in
arch)
for e in "${SUPPORTED_ARCHS[@]}"; do
if [[ "$e" == "$2" ]]; then
is_supported=true
break
fi
done
if [[ "$is_supported" == false ]]; then
echo -e "Error: Architecture $2 isn't supported!\nChoose from ${SUPPORTED_ARCHS[*]}"
exit 1
fi
;;
build_type)
for e in "${SUPPORTED_BUILD_TYPES[@]}"; do
if [[ "$e" == "$2" ]]; then
is_supported=true
break
fi
done
if [[ "$is_supported" == false ]]; then
echo -e "Error: Build type $2 isn't supported!\nChoose from ${SUPPORTED_BUILD_TYPES[*]}"
exit 1
fi
;;
os)
for e in "${SUPPORTED_OS[@]}"; do
if [[ "$e" == "$2" ]]; then
is_supported=true
break
fi
done
if [[ "$is_supported" == false ]]; then
echo -e "Error: OS $2 isn't supported!\nChoose from ${SUPPORTED_OS[*]}"
exit 1
fi
;;
toolchain)
for e in "${SUPPORTED_TOOLCHAINS[@]}"; do
if [[ "$e" == "$2" ]]; then
is_supported=true
break
fi
done
if [[ "$is_supported" == false ]]; then
echo -e "TError: oolchain version $2 isn't supported!\nChoose from ${SUPPORTED_TOOLCHAINS[*]}"
exit 1
fi
;;
os_toolchain_combo)
if [[ "$3" == "v4" ]]; then
local SUPPORTED_OS_TOOLCHAIN=("${SUPPORTED_OS_V4[@]}")
elif [[ "$3" == "v5" ]]; then
local SUPPORTED_OS_TOOLCHAIN=("${SUPPORTED_OS_V5[@]}")
else
echo -e "Error: $3 isn't a supported toolchain_version!\nChoose from ${SUPPORTED_TOOLCHAINS[*]}"
exit 1
fi
for e in "${SUPPORTED_OS_TOOLCHAIN[@]}"; do
if [[ "$e" == "$2" ]]; then
is_supported=true
break
fi
done
if [[ "$is_supported" == false ]]; then
echo -e "Error: Toolchain version $3 doesn't support OS $2!\nChoose from ${SUPPORTED_OS_TOOLCHAIN[*]}"
exit 1
fi
;;
*)
echo -e "Error: This function can only check arch, build_type, os, toolchain version and os toolchain combination"
exit 1
;;
esac
}
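# Usage sketch (illustrative): the first argument names the property to
# validate and the rest are the values, mirroring the calls made during
# argument parsing below, e.g.:
#   check_support arch amd
#   check_support os_toolchain_combo debian-12 v5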
##################################################
######## BUILD, COPY AND PACKAGE MEMGRAPH ########
##################################################
build_memgraph () {
local build_container="mgbuild_${toolchain_version}_${os}"
local ACTIVATE_TOOLCHAIN="source /opt/toolchain-${toolchain_version}/activate"
local ACTIVATE_CARGO="source $MGBUILD_HOME_DIR/.cargo/env"
local container_build_dir="$MGBUILD_ROOT_DIR/build"
local container_output_dir="$container_build_dir/output"
local arm_flag=""
if [[ "$arch" == "arm" ]] || [[ "$os" =~ "-arm" ]]; then
arm_flag="-DMG_ARCH=ARM64"
fi
local build_type_flag="-DCMAKE_BUILD_TYPE=$build_type"
local telemetry_id_override_flag=""
local community_flag=""
local coverage_flag=""
local asan_flag=""
local ubsan_flag=""
local init_only=false
local for_docker=false
local for_platform=false
local copy_from_host=true
while [[ "$#" -gt 0 ]]; do
case "$1" in
--community)
community_flag="-DMG_ENTERPRISE=OFF"
shift 1
;;
--init-only)
init_only=true
shift 1
;;
--for-docker)
for_docker=true
if [[ "$for_platform" == "true" ]]; then
echo "Error: Cannot combine --for-docker and --for-platform flags"
exit 1
fi
telemetry_id_override_flag=" -DMG_TELEMETRY_ID_OVERRIDE=DOCKER "
shift 1
;;
--for-platform)
for_platform=true
if [[ "$for_docker" == "true" ]]; then
echo "Error: Cannot combine --for-docker and --for-platform flags"
exit 1
fi
telemetry_id_override_flag=" -DMG_TELEMETRY_ID_OVERRIDE=DOCKER-PLATFORM "
shift 1
;;
--coverage)
coverage_flag="-DTEST_COVERAGE=ON"
shift 1
;;
--asan)
asan_flag="-DASAN=ON"
shift 1
;;
--ubsan)
ubsan_flag="-DUBSAN=ON"
shift 1
;;
--no-copy)
copy_from_host=false
shift 1
;;
*)
echo "Error: Unknown flag '$1'"
exit 1
;;
esac
done
echo "Initializing deps ..."
# If master is not the current branch, fetch it, because the get_version
# script depends on it. If we are already on master, the fetch command would
# fail, hence the explicit branch check.
# Required here because the Docker build container can't access the remote.
cd "$PROJECT_ROOT"
if [[ "$(git rev-parse --abbrev-ref HEAD)" != "master" ]]; then
git fetch origin master:master
fi
if [[ "$copy_from_host" == "true" ]]; then
# Ensure we have a clean build directory
docker exec -u mg "$build_container" bash -c "rm -rf $MGBUILD_ROOT_DIR && mkdir -p $MGBUILD_ROOT_DIR"
echo "Copying project files..."
docker cp "$PROJECT_ROOT/." "$build_container:$MGBUILD_ROOT_DIR/"
fi
# Change ownership of copied files so the mg user inside container can access them
docker exec -u root $build_container bash -c "chown -R mg:mg $MGBUILD_ROOT_DIR"
echo "Installing dependencies using '/memgraph/environment/os/$os.sh' script..."
docker exec -u root "$build_container" bash -c "$MGBUILD_ROOT_DIR/environment/os/$os.sh check TOOLCHAIN_RUN_DEPS || /environment/os/$os.sh install TOOLCHAIN_RUN_DEPS"
docker exec -u root "$build_container" bash -c "$MGBUILD_ROOT_DIR/environment/os/$os.sh check MEMGRAPH_BUILD_DEPS || /environment/os/$os.sh install MEMGRAPH_BUILD_DEPS"
echo "Building targeted package..."
# Fix issue with git marking directory as not safe
docker exec -u mg "$build_container" bash -c "cd $MGBUILD_ROOT_DIR && git config --global --add safe.directory '*'"
docker exec -u mg "$build_container" bash -c "cd $MGBUILD_ROOT_DIR && $ACTIVATE_TOOLCHAIN && ./init --ci"
if [[ "$init_only" == "true" ]]; then
return
fi
echo "Building Memgraph for $os on $build_container..."
docker exec -u mg "$build_container" bash -c "cd $container_build_dir && rm -rf ./*"
# Fix cmake failing locally if the remote was cloned via ssh
docker exec -u mg "$build_container" bash -c "cd $MGBUILD_ROOT_DIR && git remote set-url origin https://github.com/memgraph/memgraph.git"
# Define cmake command
local cmake_cmd="cmake $build_type_flag $arm_flag $community_flag $telemetry_id_override_flag $coverage_flag $asan_flag $ubsan_flag .."
docker exec -u mg "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && $ACTIVATE_CARGO && $cmake_cmd"
# Single quotes are used instead of double quotes so that $(nproc) expands
# inside the container, i.e. make runs within the container's allowed
# resources.
# The default value for $threads is 0 instead of $(nproc) because macOS
# doesn't ship the nproc command; 0 is checked here because the mgbuild
# containers do support nproc.
# shellcheck disable=SC2016
if [[ "$threads" == 0 ]]; then
docker exec -u mg "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && $ACTIVATE_CARGO "'&& make -j$(nproc)'
docker exec -u mg "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && $ACTIVATE_CARGO "'&& make -j$(nproc) -B mgconsole'
else
docker exec -u mg "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && $ACTIVATE_CARGO "'&& make -j$threads'
docker exec -u mg "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && $ACTIVATE_CARGO "'&& make -j$threads -B mgconsole'
fi
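# Quoting sketch (illustrative): with double quotes the host shell expands
# $(nproc) before docker exec runs, while single quotes pass the literal
# string through so the container's bash expands it instead:
#   docker exec <container> bash -c "echo $(nproc)"  # host core count baked in
#   docker exec <container> bash -c 'echo $(nproc)'  # container evaluates nproc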
}
package_memgraph() {
local ACTIVATE_TOOLCHAIN="source /opt/toolchain-${toolchain_version}/activate"
local build_container="mgbuild_${toolchain_version}_${os}"
local container_output_dir="$MGBUILD_ROOT_DIR/build/output"
local package_command=""
if [[ "$os" =~ ^"centos".* ]] || [[ "$os" =~ ^"fedora".* ]] || [[ "$os" =~ ^"amzn".* ]] || [[ "$os" =~ ^"rocky".* ]]; then
docker exec -u root "$build_container" bash -c "yum -y update"
package_command=" cpack -G RPM --config ../CPackConfig.cmake && rpmlint --file='../../release/rpm/rpmlintrc' memgraph*.rpm "
fi
if [[ "$os" =~ ^"debian".* ]]; then
docker exec -u root "$build_container" bash -c "apt --allow-releaseinfo-change -y update"
package_command=" cpack -G DEB --config ../CPackConfig.cmake "
fi
if [[ "$os" =~ ^"ubuntu".* ]]; then
docker exec -u root "$build_container" bash -c "apt update"
package_command=" cpack -G DEB --config ../CPackConfig.cmake "
fi
docker exec -u mg "$build_container" bash -c "mkdir -p $container_output_dir && cd $container_output_dir && $ACTIVATE_TOOLCHAIN && $package_command"
}
copy_memgraph() {
local build_container="mgbuild_${toolchain_version}_${os}"
case "$1" in
--binary)
echo "Copying memgraph binary to host..."
local container_output_path="$MGBUILD_ROOT_DIR/build/memgraph"
local host_output_path="$PROJECT_ROOT/build/memgraph"
mkdir -p "$PROJECT_ROOT/build"
docker cp -L $build_container:$container_output_path $host_output_path
echo "Binary saved to $host_output_path"
;;
--build-logs)
echo "Copying memgraph build logs to host..."
local container_output_path="$MGBUILD_ROOT_DIR/build/logs"
local host_output_path="$PROJECT_ROOT/build/logs"
mkdir -p "$PROJECT_ROOT/build"
docker cp -L $build_container:$container_output_path $host_output_path
echo "Build logs saved to $host_output_path"
;;
--package)
echo "Copying memgraph package to host..."
local container_output_dir="$MGBUILD_ROOT_DIR/build/output"
local host_output_dir="$PROJECT_ROOT/build/output/$os"
local last_package_name=$(docker exec -u mg "$build_container" bash -c "cd $container_output_dir && ls -t memgraph* | head -1")
mkdir -p "$host_output_dir"
docker cp "$build_container:$container_output_dir/$last_package_name" "$host_output_dir/$last_package_name"
echo "Package saved to $host_output_dir/$last_package_name"
;;
*)
echo "Error: Unknown flag '$1'"
exit 1
;;
esac
}
##################################################
##################### TESTS ######################
##################################################
test_memgraph() {
local ACTIVATE_TOOLCHAIN="source /opt/toolchain-${toolchain_version}/activate"
local ACTIVATE_VENV="./setup.sh /opt/toolchain-${toolchain_version}/activate"
local ACTIVATE_CARGO="source $MGBUILD_HOME_DIR/.cargo/env"
local EXPORT_LICENSE="export MEMGRAPH_ENTERPRISE_LICENSE=$enterprise_license"
local EXPORT_ORG_NAME="export MEMGRAPH_ORGANIZATION_NAME=$organization_name"
local BUILD_DIR="$MGBUILD_ROOT_DIR/build"
local build_container="mgbuild_${toolchain_version}_${os}"
echo "Running $1 test on $build_container..."
case "$1" in
unit)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $BUILD_DIR && $ACTIVATE_TOOLCHAIN "'&& ctest -R memgraph__unit --output-on-failure -j$threads'
;;
unit-coverage)
local setup_lsan_ubsan="export LSAN_OPTIONS=suppressions=$BUILD_DIR/../tools/lsan.supp && export UBSAN_OPTIONS=halt_on_error=1"
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $BUILD_DIR && $ACTIVATE_TOOLCHAIN && $setup_lsan_ubsan "'&& ctest -R memgraph__unit --output-on-failure -j2'
;;
leftover-CTest)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $BUILD_DIR && $ACTIVATE_TOOLCHAIN "'&& ctest -E "(memgraph__unit|memgraph__benchmark)" --output-on-failure'
;;
drivers)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR "'&& ./tests/drivers/run.sh'
;;
drivers-high-availability)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR "'&& ./tests/drivers/run_cluster.sh'
;;
integration)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR "'&& tests/integration/run.sh'
;;
cppcheck-and-clang-format)
local test_output_path="$MGBUILD_ROOT_DIR/tools/github/cppcheck_and_clang_format.txt"
local test_output_host_dest="$PROJECT_ROOT/tools/github/cppcheck_and_clang_format.txt"
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tools/github && $ACTIVATE_TOOLCHAIN "'&& ./cppcheck_and_clang_format diff'
docker cp $build_container:$test_output_path $test_output_host_dest
;;
stress-plain)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tests/stress && source ve3/bin/activate "'&& ./continuous_integration'
;;
stress-ssl)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tests/stress && source ve3/bin/activate "'&& ./continuous_integration --use-ssl'
;;
durability)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tests/stress && source ve3/bin/activate "'&& python3 durability --num-steps 5'
;;
gql-behave)
local test_output_dir="$MGBUILD_ROOT_DIR/tests/gql_behave"
local test_output_host_dest="$PROJECT_ROOT/tests/gql_behave"
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tests && $ACTIVATE_VENV && cd $MGBUILD_ROOT_DIR/tests/gql_behave "'&& ./continuous_integration'
docker cp $build_container:$test_output_dir/gql_behave_status.csv $test_output_host_dest/gql_behave_status.csv
docker cp $build_container:$test_output_dir/gql_behave_status.html $test_output_host_dest/gql_behave_status.html
;;
macro-benchmark)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && export USER=mg && export LANG=$(echo $LANG) && cd $MGBUILD_ROOT_DIR/tests/macro_benchmark "'&& ./harness QuerySuite MemgraphRunner --groups aggregation 1000_create unwind_create dense_expand match --no-strict'
;;
mgbench)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tests/mgbench "'&& ./benchmark.py vendor-native --num-workers-for-benchmark 12 --export-results benchmark_result.json pokec/medium/*/*'
;;
upload-to-bench-graph)
shift 1
local SETUP_PASSED_ARGS="export PASSED_ARGS=\"$@\""
local SETUP_VE3_ENV="virtualenv -p python3 ve3 && source ve3/bin/activate && pip install -r requirements.txt"
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tools/bench-graph-client && $SETUP_VE3_ENV && $SETUP_PASSED_ARGS "'&& ./main.py $PASSED_ARGS'
;;
code-analysis)
shift 1
local SETUP_PASSED_ARGS="export PASSED_ARGS=\"$@\""
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tests/code_analysis && $SETUP_PASSED_ARGS "'&& ./python_code_analysis.sh $PASSED_ARGS'
;;
code-coverage)
local test_output_path="$MGBUILD_ROOT_DIR/tools/github/generated/code_coverage.tar.gz"
local test_output_host_dest="$PROJECT_ROOT/tools/github/generated/code_coverage.tar.gz"
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && $ACTIVATE_TOOLCHAIN && cd $MGBUILD_ROOT_DIR/tools/github "'&& ./coverage_convert'
docker exec -u mg $build_container bash -c "cd $MGBUILD_ROOT_DIR/tools/github/generated && tar -czf code_coverage.tar.gz coverage.json html report.json summary.rmu"
mkdir -p $PROJECT_ROOT/tools/github/generated
docker cp $build_container:$test_output_path $test_output_host_dest
;;
clang-tidy)
shift 1
local SETUP_PASSED_ARGS="export PASSED_ARGS=\"$@\""
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && export THREADS=$threads && $ACTIVATE_TOOLCHAIN && cd $MGBUILD_ROOT_DIR/tests/code_analysis && $SETUP_PASSED_ARGS "'&& ./clang_tidy.sh $PASSED_ARGS'
;;
e2e)
# local kafka_container="kafka_kafka_1"
# local kafka_hostname="kafka"
# local pulsar_container="pulsar_pulsar_1"
# local pulsar_hostname="pulsar"
# local setup_hostnames="export KAFKA_HOSTNAME=$kafka_hostname && PULSAR_HOSTNAME=$pulsar_hostname"
# local build_container_network=$(docker inspect $build_container --format='{{ .HostConfig.NetworkMode }}')
# docker network connect --alias $kafka_hostname $build_container_network $kafka_container > /dev/null 2>&1 || echo "Kafka container already inside correct network or something went wrong ..."
# docker network connect --alias $pulsar_hostname $build_container_network $pulsar_container > /dev/null 2>&1 || echo "Kafka container already inside correct network or something went wrong ..."
docker exec -u mg $build_container bash -c "pip install --user networkx && pip3 install --user networkx"
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && $ACTIVATE_CARGO && cd $MGBUILD_ROOT_DIR/tests && $ACTIVATE_VENV && source ve3/bin/activate_e2e && cd $MGBUILD_ROOT_DIR/tests/e2e "'&& ./run.sh'
;;
*)
echo "Error: Unknown test '$1'"
exit 1
;;
esac
}
##################################################
################### PARSE ARGS ###################
##################################################
if [ "$#" -eq 0 ] || [ "$1" == "-h" ] || [ "$1" == "--help" ]; then
print_help
exit 0
fi
arch=$DEFAULT_ARCH
build_type=$DEFAULT_BUILD_TYPE
enterprise_license=$DEFAULT_ENTERPRISE_LICENSE
organization_name=$DEFAULT_ORGANIZATION_NAME
os=$DEFAULT_OS
threads=$DEFAULT_THREADS
toolchain_version=$DEFAULT_TOOLCHAIN
command=""
while [[ $# -gt 0 ]]; do
case "$1" in
--arch)
arch=$2
check_support arch $arch
shift 2
;;
--build-type)
build_type=$2
check_support build_type $build_type
shift 2
;;
--enterprise-license)
enterprise_license=$2
shift 2
;;
--organization-name)
organization_name=$2
shift 2
;;
--os)
os=$2
check_support os $os
shift 2
;;
--threads)
threads=$2
shift 2
;;
--toolchain)
toolchain_version=$2
check_support toolchain $toolchain_version
shift 2
;;
*)
if [[ "$1" =~ ^--.* ]]; then
echo -e "Error: Unknown option '$1'"
exit 1
else
command=$1
shift 1
break
fi
;;
esac
done
check_support os_toolchain_combo $os $toolchain_version
if [[ "$command" == "" ]]; then
echo -e "Error: Command not provided, please provide command"
print_help
exit 1
fi
if docker compose version > /dev/null 2>&1; then
docker_compose_cmd="docker compose"
elif which docker-compose > /dev/null 2>&1; then
docker_compose_cmd="docker-compose"
else
echo -e "Missing command: There has to be installed either 'docker-compose' or 'docker compose'"
exit 1
fi
echo "Using $docker_compose_cmd"
##################################################
################# PARSE COMMAND ##################
##################################################
case $command in
build)
cd $SCRIPT_DIR
if [[ "$os" == "all" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml build
else
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml build mgbuild_${toolchain_version}_${os}
fi
;;
run)
cd $SCRIPT_DIR
pull=false
if [[ "$#" -gt 0 ]]; then
if [[ "$1" == "--pull" ]]; then
pull=true
else
echo "Error: Unknown flag '$1'"
exit 1
fi
fi
if [[ "$os" == "all" ]]; then
if [[ "$pull" == "true" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml pull --ignore-pull-failures
elif [[ "$docker_compose_cmd" == "docker compose" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml pull --ignore-pull-failures --policy missing
fi
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml up -d
else
if [[ "$pull" == "true" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml pull mgbuild_${toolchain_version}_${os}
elif ! docker image inspect memgraph/mgbuild:${toolchain_version}_${os} > /dev/null 2>&1; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml pull --ignore-pull-failures mgbuild_${toolchain_version}_${os}
fi
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml up -d mgbuild_${toolchain_version}_${os}
fi
;;
stop)
cd $SCRIPT_DIR
remove=false
if [[ "$#" -gt 0 ]]; then
if [[ "$1" == "--remove" ]]; then
remove=true
else
echo "Error: Unknown flag '$1'"
exit 1
fi
fi
if [[ "$os" == "all" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml down
else
docker stop mgbuild_${toolchain_version}_${os}
if [[ "$remove" == "true" ]]; then
docker rm mgbuild_${toolchain_version}_${os}
fi
fi
;;
pull)
cd $SCRIPT_DIR
if [[ "$os" == "all" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml pull --ignore-pull-failures
else
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml pull mgbuild_${toolchain_version}_${os}
fi
;;
push)
docker login "$@"
cd $SCRIPT_DIR
if [[ "$os" == "all" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml push --ignore-push-failures
else
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml push mgbuild_${toolchain_version}_${os}
fi
;;
build-memgraph)
build_memgraph "$@"
;;
package-memgraph)
package_memgraph
;;
test-memgraph)
test_memgraph "$@"
;;
copy)
copy_memgraph "$@"
;;
*)
echo "Error: Unknown command '$command'"
exit 1
;;
esac

View File

@ -0,0 +1,40 @@
FROM rockylinux:9.3
ARG TOOLCHAIN_VERSION
# Update base packages and install wget and git.
RUN yum -y update \
&& yum install -y wget git
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-rocky-9.3-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-rocky-9.3-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-rocky-9.3-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-rocky-9.3-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/rocky-9.3.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/rocky-9.3.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
# Install PyYAML (only for amzn-2, centos-7, centos-9 and rocky-9.3)
RUN pip3 install --user PyYAML
ENTRYPOINT ["sleep", "infinity"]

View File

@ -1,208 +0,0 @@
#!/bin/bash
set -Eeuo pipefail
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
SUPPORTED_OS=(
centos-7 centos-9
debian-10 debian-11 debian-11-arm
ubuntu-18.04 ubuntu-20.04 ubuntu-22.04 ubuntu-22.04-arm
fedora-36
amzn-2
)
SUPPORTED_BUILD_TYPES=(
Debug
Release
RelWithDebInfo
)
PROJECT_ROOT="$SCRIPT_DIR/../.."
TOOLCHAIN_VERSION="toolchain-v4"
ACTIVATE_TOOLCHAIN="source /opt/${TOOLCHAIN_VERSION}/activate"
HOST_OUTPUT_DIR="$PROJECT_ROOT/build/output"
print_help () {
# TODO(gitbuda): Update the release/package/run.sh help
echo "$0 init|package|docker|test {os} {build_type} [--for-docker|--for-platform]"
echo ""
echo " OSs: ${SUPPORTED_OS[*]}"
echo " Build types: ${SUPPORTED_BUILD_TYPES[*]}"
exit 1
}
make_package () {
os="$1"
build_type="$2"
build_container="mgbuild_$os"
echo "Building Memgraph for $os on $build_container..."
package_command=""
if [[ "$os" =~ ^"centos".* ]] || [[ "$os" =~ ^"fedora".* ]] || [[ "$os" =~ ^"amzn".* ]]; then
docker exec "$build_container" bash -c "yum -y update"
package_command=" cpack -G RPM --config ../CPackConfig.cmake && rpmlint --file='../../release/rpm/rpmlintrc' memgraph*.rpm "
fi
if [[ "$os" =~ ^"debian".* ]]; then
docker exec "$build_container" bash -c "apt --allow-releaseinfo-change -y update"
package_command=" cpack -G DEB --config ../CPackConfig.cmake "
fi
if [[ "$os" =~ ^"ubuntu".* ]]; then
docker exec "$build_container" bash -c "apt update"
package_command=" cpack -G DEB --config ../CPackConfig.cmake "
fi
telemetry_id_override_flag=""
if [[ "$#" -gt 2 ]]; then
if [[ "$3" == "--for-docker" ]]; then
telemetry_id_override_flag=" -DMG_TELEMETRY_ID_OVERRIDE=DOCKER "
elif [[ "$3" == "--for-platform" ]]; then
telemetry_id_override_flag=" -DMG_TELEMETRY_ID_OVERRIDE=DOCKER-PLATFORM"
else
print_help
exit
fi
fi
echo "Copying project files..."
# If master is not the current branch, fetch it, because the get_version
# script depends on it. If we are already on master, the fetch command would
# fail, hence the explicit branch check.
# Required here because the Docker build container can't access the remote.
cd "$PROJECT_ROOT"
if [[ "$(git rev-parse --abbrev-ref HEAD)" != "master" ]]; then
git fetch origin master:master
fi
# Ensure we have a clean build directory
docker exec "$build_container" rm -rf /memgraph
docker exec "$build_container" mkdir -p /memgraph
# TODO(gitbuda): Revisit copying the whole repo -> makes sense under CI.
docker cp "$PROJECT_ROOT/." "$build_container:/memgraph/"
container_build_dir="/memgraph/build"
container_output_dir="$container_build_dir/output"
# TODO(gitbuda): TOOLCHAIN_RUN_DEPS should be installed during the Docker
# image build phase, but that is not easy at this point because the
# environment/os/{os}.sh does not come within the toolchain package. When
# migrating to the next version of toolchain do that, and remove the
# TOOLCHAIN_RUN_DEPS installation from here.
# TODO(gitbuda): On the other hand, having this here allows updating deps
# without rerunning the build containers.
echo "Installing dependencies using '/memgraph/environment/os/$os.sh' script..."
docker exec "$build_container" bash -c "/memgraph/environment/os/$os.sh install TOOLCHAIN_RUN_DEPS"
docker exec "$build_container" bash -c "/memgraph/environment/os/$os.sh install MEMGRAPH_BUILD_DEPS"
echo "Building targeted package..."
# Fix issue with git marking directory as not safe
docker exec "$build_container" bash -c "cd /memgraph && git config --global --add safe.directory '*'"
docker exec "$build_container" bash -c "cd /memgraph && $ACTIVATE_TOOLCHAIN && ./init"
docker exec "$build_container" bash -c "cd $container_build_dir && rm -rf ./*"
# TODO(gitbuda): cmake fails locally if the remote was cloned via ssh because of the key -> FIX
if [[ "$os" =~ "-arm" ]]; then
docker exec "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && cmake -DCMAKE_BUILD_TYPE=$build_type -DMG_ARCH="ARM64" $telemetry_id_override_flag .."
else
docker exec "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && cmake -DCMAKE_BUILD_TYPE=$build_type $telemetry_id_override_flag .."
fi
# ' is used instead of " because we need to run make within the allowed
# container resources.
# shellcheck disable=SC2016
docker exec "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN "'&& make -j$(nproc)'
docker exec "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN "'&& make -j$(nproc) -B mgconsole'
docker exec "$build_container" bash -c "mkdir -p $container_output_dir && cd $container_output_dir && $ACTIVATE_TOOLCHAIN && $package_command"
echo "Copying targeted package to host..."
last_package_name=$(docker exec "$build_container" bash -c "cd $container_output_dir && ls -t memgraph* | head -1")
# The operating system folder is introduced because multiple different
# packages could be preserved during the same build "session".
mkdir -p "$HOST_OUTPUT_DIR/$os"
package_host_destination="$HOST_OUTPUT_DIR/$os/$last_package_name"
docker cp "$build_container:$container_output_dir/$last_package_name" "$package_host_destination"
echo "Package saved to $package_host_destination."
}
case "$1" in
init)
cd "$SCRIPT_DIR"
if ! which "docker-compose" >/dev/null; then
docker_compose_cmd="docker compose"
else
docker_compose_cmd="docker-compose"
fi
$docker_compose_cmd build --build-arg TOOLCHAIN_VERSION="${TOOLCHAIN_VERSION}"
$docker_compose_cmd up -d
;;
docker)
# NOTE: The Docker image is built on top of the Debian 11 package.
based_on_os="debian-11"
# shellcheck disable=SC2012
last_package_name=$(cd "$HOST_OUTPUT_DIR/$based_on_os" && ls -t memgraph* | head -1)
docker_build_folder="$PROJECT_ROOT/release/docker"
cd "$docker_build_folder"
./package_docker --latest "$HOST_OUTPUT_DIR/$based_on_os/$last_package_name"
# shellcheck disable=SC2012
docker_image_name=$(cd "$docker_build_folder" && ls -t memgraph* | head -1)
docker_host_folder="$HOST_OUTPUT_DIR/docker"
docker_host_image_path="$docker_host_folder/$docker_image_name"
mkdir -p "$docker_host_folder"
cp "$docker_build_folder/$docker_image_name" "$docker_host_image_path"
echo "Docker images saved to $docker_host_image_path."
;;
package)
shift 1
if [[ "$#" -lt 2 ]]; then
print_help
fi
os="$1"
build_type="$2"
shift 2
is_os_ok=false
for supported_os in "${SUPPORTED_OS[@]}"; do
if [[ "$supported_os" == "${os}" ]]; then
is_os_ok=true
break
fi
done
is_build_type_ok=false
for supported_build_type in "${SUPPORTED_BUILD_TYPES[@]}"; do
if [[ "$supported_build_type" == "${build_type}" ]]; then
is_build_type_ok=true
break
fi
done
if [[ "$is_os_ok" == true && "$is_build_type_ok" == true ]]; then
make_package "$os" "$build_type" "$@"
else
if [[ "$is_os_ok" == false ]]; then
echo "Unsupported OS: $os"
elif [[ "$is_build_type_ok" == false ]]; then
echo "Unsupported build type: $build_type"
fi
print_help
fi
;;
build)
shift 1
if [[ "$#" -ne 2 ]]; then
print_help
fi
# in the vX format, e.g. v5
toolchain_version="$1"
# a name of the os folder, e.g. ubuntu-22.04-arm
os="$2"
cd "$SCRIPT_DIR/$os"
docker build -f Dockerfile --build-arg TOOLCHAIN_VERSION="toolchain-$toolchain_version" -t "memgraph/memgraph-builder:${toolchain_version}_$os" .
;;
test)
echo "TODO(gitbuda): Test all packages on mgtest containers."
;;
*)
print_help
;;
esac

View File

@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/ubuntu-18.04.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/ubuntu-18.04.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

View File

@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/ubuntu-20.04.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/ubuntu-20.04.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

View File

@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/ubuntu-22.04-arm.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/ubuntu-22.04-arm.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

View File

@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/ubuntu-22.04.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/ubuntu-22.04.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

View File

@ -22,8 +22,10 @@ add_subdirectory(dbms)
add_subdirectory(flags)
add_subdirectory(distributed)
add_subdirectory(replication)
add_subdirectory(replication_handler)
add_subdirectory(coordination)
add_subdirectory(replication_coordination_glue)
add_subdirectory(system)
string(TOLOWER ${CMAKE_BUILD_TYPE} lower_build_type)
@ -43,10 +45,10 @@ set(mg_single_node_v2_sources
add_executable(memgraph ${mg_single_node_v2_sources})
target_include_directories(memgraph PUBLIC ${CMAKE_SOURCE_DIR}/include)
target_link_libraries(memgraph stdc++fs Threads::Threads
mg-telemetry mg-communication mg-communication-metrics mg-memory mg-utils mg-license mg-settings mg-glue mg-flags)
mg-telemetry mgcxx_text_search tantivy_text_search mg-communication mg-communication-metrics mg-memory mg-utils mg-license mg-settings mg-glue mg-flags mg::system mg::replication_handler)
# NOTE: `include/mg_procedure.syms` describes a pattern match for symbols which
# should be dynamically exported, so that `dlopen` can correctly link the
# symbols in custom procedure module libraries.
target_link_libraries(memgraph "-Wl,--dynamic-list=${CMAKE_SOURCE_DIR}/include/mg_procedure.syms")
set_target_properties(memgraph PROPERTIES

View File

@ -2,7 +2,9 @@ set(auth_src_files
auth.cpp
crypto.cpp
models.cpp
module.cpp)
module.cpp
rpc.cpp
replication_handlers.cpp)
find_package(Seccomp REQUIRED)
find_package(fmt REQUIRED)
@ -11,7 +13,7 @@ find_package(gflags REQUIRED)
add_library(mg-auth STATIC ${auth_src_files})
target_link_libraries(mg-auth json libbcrypt gflags fmt::fmt)
target_link_libraries(mg-auth mg-utils mg-kvstore mg-license )
target_link_libraries(mg-auth mg-utils mg-kvstore mg-license mg::system mg-replication)
target_link_libraries(mg-auth ${Seccomp_LIBRARIES})
target_include_directories(mg-auth SYSTEM PRIVATE ${Seccomp_INCLUDE_DIRS})

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Licensed as a Memgraph Enterprise file under the Memgraph Enterprise
// License (the "License"); by using this file, you agree to be bound by the terms of the License, and you may not use
@ -8,17 +8,18 @@
#include "auth/auth.hpp"
#include <cstring>
#include <iostream>
#include <limits>
#include <optional>
#include <utility>
#include <fmt/format.h>
#include "auth/crypto.hpp"
#include "auth/exceptions.hpp"
#include "auth/rpc.hpp"
#include "license/license.hpp"
#include "system/transaction.hpp"
#include "utils/flag_validation.hpp"
#include "utils/logging.hpp"
#include "utils/message.hpp"
#include "utils/settings.hpp"
#include "utils/string.hpp"
@ -34,18 +35,124 @@ DEFINE_VALIDATED_string(auth_module_executable, "", "Absolute path to the auth m
}
return true;
});
DEFINE_bool(auth_module_create_missing_user, true, "Set to false to disable creation of missing users.");
DEFINE_bool(auth_module_create_missing_role, true, "Set to false to disable creation of missing roles.");
DEFINE_bool(auth_module_manage_roles, true, "Set to false to disable management of roles through the auth module.");
DEFINE_VALIDATED_int32(auth_module_timeout_ms, 10000,
"Timeout (in milliseconds) used when waiting for a "
"response from the auth module.",
FLAG_IN_RANGE(100, 1800000));
// DEPRECATED FLAGS
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables, misc-unused-parameters)
DEFINE_VALIDATED_HIDDEN_bool(auth_module_create_missing_user, true,
"Set to false to disable creation of missing users.", {
spdlog::warn(
"auth_module_create_missing_user flag is deprecated. It not possible to create "
"users through the module anymore.");
return true;
});
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables, misc-unused-parameters)
DEFINE_VALIDATED_HIDDEN_bool(auth_module_create_missing_role, true,
"Set to false to disable creation of missing roles.", {
spdlog::warn(
"auth_module_create_missing_role flag is deprecated. It not possible to create "
"roles through the module anymore.");
return true;
});
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables, misc-unused-parameters)
DEFINE_VALIDATED_HIDDEN_bool(
auth_module_manage_roles, true, "Set to false to disable management of roles through the auth module.", {
spdlog::warn(
"auth_module_manage_roles flag is deprecated. It not possible to create roles through the module anymore.");
return true;
});
namespace memgraph::auth {
const Auth::Epoch Auth::kStartEpoch = 1;
namespace {
#ifdef MG_ENTERPRISE
/**
* REPLICATION SYSTEM ACTION IMPLEMENTATIONS
*/
struct UpdateAuthData : memgraph::system::ISystemAction {
explicit UpdateAuthData(User user) : user_{std::move(user)}, role_{std::nullopt} {}
explicit UpdateAuthData(Role role) : user_{std::nullopt}, role_{std::move(role)} {}
void DoDurability() override { /* Done during Auth execution */
}
bool DoReplication(replication::ReplicationClient &client, const utils::UUID &main_uuid,
replication::ReplicationEpoch const &epoch,
memgraph::system::Transaction const &txn) const override {
auto check_response = [](const replication::UpdateAuthDataRes &response) { return response.success; };
if (user_) {
return client.SteamAndFinalizeDelta<replication::UpdateAuthDataRpc>(
check_response, main_uuid, std::string{epoch.id()}, txn.last_committed_system_timestamp(), txn.timestamp(),
*user_);
}
if (role_) {
return client.SteamAndFinalizeDelta<replication::UpdateAuthDataRpc>(
check_response, main_uuid, std::string{epoch.id()}, txn.last_committed_system_timestamp(), txn.timestamp(),
*role_);
}
// Should never get here
MG_ASSERT(false, "Trying to update auth data that is not a user nor a role");
return {};
}
void PostReplication(replication::RoleMainData &mainData) const override {}
private:
std::optional<User> user_;
std::optional<Role> role_;
};
struct DropAuthData : memgraph::system::ISystemAction {
enum class AuthDataType { USER, ROLE };
explicit DropAuthData(AuthDataType type, std::string_view name) : type_{type}, name_{name} {}
void DoDurability() override { /* Done during Auth execution */
}
bool DoReplication(replication::ReplicationClient &client, const utils::UUID &main_uuid,
replication::ReplicationEpoch const &epoch,
memgraph::system::Transaction const &txn) const override {
auto check_response = [](const replication::DropAuthDataRes &response) { return response.success; };
memgraph::replication::DropAuthDataReq::DataType type{};
switch (type_) {
case AuthDataType::USER:
type = memgraph::replication::DropAuthDataReq::DataType::USER;
break;
case AuthDataType::ROLE:
type = memgraph::replication::DropAuthDataReq::DataType::ROLE;
break;
}
return client.SteamAndFinalizeDelta<replication::DropAuthDataRpc>(
check_response, main_uuid, std::string{epoch.id()}, txn.last_committed_system_timestamp(), txn.timestamp(),
type, name_);
}
void PostReplication(replication::RoleMainData &mainData) const override {}
private:
AuthDataType type_;
std::string name_;
};
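// Usage sketch (illustrative): these actions are queued on the ongoing system
// transaction and streamed to REPLICAs as deltas, e.g.:
//   system_tx->AddAction<UpdateAuthData>(user);  // see SaveUser below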
#endif
/**
* CONSTANTS
*/
const std::string kUserPrefix = "user:";
const std::string kRolePrefix = "role:";
const std::string kLinkPrefix = "link:";
const std::string kVersion = "version";
static constexpr auto kVersionV1 = "V1";
} // namespace
/**
* All data stored in the `Auth` storage is stored in an underlying
@ -64,10 +171,76 @@ const std::string kLinkPrefix = "link:";
* key="link:<username>", value="<rolename>"
*/
Auth::Auth(const std::string &storage_directory) : storage_(storage_directory), module_(FLAGS_auth_module_executable) {}
namespace {
void MigrateVersions(kvstore::KVStore &store) {
static constexpr auto kPasswordHashV0V1 = "password_hash";
auto version_str = store.Get(kVersion);
std::optional<User> Auth::Authenticate(const std::string &username, const std::string &password) {
if (!version_str) {
using namespace std::string_literals;
// pre versioning, add version to the store
auto puts = std::map<std::string, std::string>{{kVersion, kVersionV1}};
// also add hash kind into durability
auto it = store.begin(kUserPrefix);
auto const e = store.end(kUserPrefix);
if (it != e) {
const auto hash_algo = CurrentHashAlgorithm();
spdlog::info("Updating auth durability, assuming previously stored as {}", AsString(hash_algo));
for (; it != e; ++it) {
auto const &[key, value] = *it;
try {
auto user_data = nlohmann::json::parse(value);
auto password_hash = user_data[kPasswordHashV0V1];
if (!password_hash.is_string()) {
throw AuthException("Couldn't load user data!");
}
// upgrade the password_hash to include the hash algorithm
if (password_hash.empty()) {
user_data[kPasswordHashV0V1] = nullptr;
} else {
user_data[kPasswordHashV0V1] = HashedPassword{hash_algo, password_hash};
}
puts.emplace(key, user_data.dump());
} catch (const nlohmann::json::parse_error &e) {
throw AuthException("Couldn't load user data!");
}
}
}
// Perform migration to V1
store.PutMultiple(puts);
version_str = kVersionV1;
}
}
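// Migration sketch (hypothetical values; field names illustrative): a V0 user
// entry stores the raw hash string, while V1 wraps it with the algorithm, e.g.
//   V0: {"password_hash": "<digest>"}
//   V1: {"password_hash": {"hash_algo": "<current algorithm>", "password_hash": "<digest>"}}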
auto ParseJson(std::string_view str) {
nlohmann::json data;
try {
data = nlohmann::json::parse(str);
} catch (const nlohmann::json::parse_error &e) {
throw AuthException("Couldn't load auth data!");
}
return data;
}
}; // namespace
Auth::Auth(std::string storage_directory, Config config)
: storage_(std::move(storage_directory)), module_(FLAGS_auth_module_executable), config_{std::move(config)} {
MigrateVersions(storage_);
}
std::optional<UserOrRole> Auth::Authenticate(const std::string &username, const std::string &password) {
if (module_.IsUsed()) {
/*
* MODULE AUTH STORAGE
*/
const auto license_check_result = license::global_license_checker.IsEnterpriseValid(utils::global_settings);
if (license_check_result.HasError()) {
spdlog::warn(license::LicenseCheckErrorToString(license_check_result.GetError(), "authentication modules"));
@ -92,98 +265,64 @@ std::optional<User> Auth::Authenticate(const std::string &username, const std::s
auto is_authenticated = ret_authenticated.get<bool>();
const auto &rolename = ret_role.get<std::string>();
// Check if role is present
auto role = GetRole(rolename);
if (!role) {
spdlog::warn(utils::MessageWithLink("Couldn't authenticate user '{}' because the role '{}' doesn't exist.",
username, rolename, "https://memgr.ph/auth"));
return std::nullopt;
}
// Authenticate the user.
if (!is_authenticated) return std::nullopt;
// Find or create the user and return it.
auto user = GetUser(username);
if (!user) {
if (FLAGS_auth_module_create_missing_user) {
user = AddUser(username, password);
if (!user) {
spdlog::warn(utils::MessageWithLink(
"Couldn't create the missing user '{}' using the auth module because the user already exists as a role.",
username, "https://memgr.ph/auth"));
return std::nullopt;
}
} else {
spdlog::warn(utils::MessageWithLink(
"Couldn't authenticate user '{}' using the auth module because the user doesn't exist.", username,
"https://memgr.ph/auth"));
return std::nullopt;
}
} else {
user->UpdatePassword(password);
}
if (FLAGS_auth_module_manage_roles) {
if (!rolename.empty()) {
auto role = GetRole(rolename);
if (!role) {
if (FLAGS_auth_module_create_missing_role) {
role = AddRole(rolename);
if (!role) {
spdlog::warn(
utils::MessageWithLink("Couldn't authenticate user '{}' using the auth module because the user's "
"role '{}' already exists as a user.",
username, rolename, "https://memgr.ph/auth"));
return std::nullopt;
}
SaveRole(*role);
} else {
spdlog::warn(utils::MessageWithLink(
"Couldn't authenticate user '{}' using the auth module because the user's role '{}' doesn't exist.",
username, rolename, "https://memgr.ph/auth"));
return std::nullopt;
}
}
user->SetRole(*role);
} else {
user->ClearRole();
}
}
SaveUser(*user);
return user;
} else {
auto user = GetUser(username);
if (!user) {
spdlog::warn(utils::MessageWithLink("Couldn't authenticate user '{}' because the user doesn't exist.", username,
"https://memgr.ph/auth"));
return std::nullopt;
}
if (!user->CheckPassword(password)) {
spdlog::warn(utils::MessageWithLink("Couldn't authenticate user '{}' because the password is not correct.",
username, "https://memgr.ph/auth"));
return std::nullopt;
}
return user;
return RoleWUsername{username, std::move(*role)};
}
/*
* LOCAL AUTH STORAGE
*/
auto user = GetUser(username);
if (!user) {
spdlog::warn(utils::MessageWithLink("Couldn't authenticate user '{}' because the user doesn't exist.", username,
"https://memgr.ph/auth"));
return std::nullopt;
}
if (!user->CheckPassword(password)) {
spdlog::warn(utils::MessageWithLink("Couldn't authenticate user '{}' because the password is not correct.",
username, "https://memgr.ph/auth"));
return std::nullopt;
}
if (user->UpgradeHash(password)) {
SaveUser(*user);
}
return user;
}
std::optional<User> Auth::GetUser(const std::string &username_orig) const {
auto username = utils::ToLowerCase(username_orig);
auto existing_user = storage_.Get(kUserPrefix + username);
if (!existing_user) return std::nullopt;
nlohmann::json data;
try {
data = nlohmann::json::parse(*existing_user);
} catch (const nlohmann::json::parse_error &e) {
throw AuthException("Couldn't load user data!");
}
auto user = User::Deserialize(data);
auto link = storage_.Get(kLinkPrefix + username);
void Auth::LinkUser(User &user) const {
auto link = storage_.Get(kLinkPrefix + user.username());
if (link) {
auto role = GetRole(*link);
if (role) {
user.SetRole(*role);
}
}
}
std::optional<User> Auth::GetUser(const std::string &username_orig) const {
if (module_.IsUsed()) return std::nullopt; // Users are not supported when using a module
auto username = utils::ToLowerCase(username_orig);
auto existing_user = storage_.Get(kUserPrefix + username);
if (!existing_user) return std::nullopt;
auto user = User::Deserialize(ParseJson(*existing_user));
LinkUser(user);
return user;
}
void Auth::SaveUser(const User &user) {
void Auth::SaveUser(const User &user, system::Transaction *system_tx) {
DisableIfModuleUsed();
bool success = false;
if (const auto *role = user.role(); role != nullptr) {
success = storage_.PutMultiple(
@ -195,26 +334,83 @@ void Auth::SaveUser(const User &user) {
if (!success) {
throw AuthException("Couldn't save user '{}'!", user.username());
}
// Durability updated -> new epoch
UpdateEpoch();
// All changes to the user end up calling this function, so no need to add a delta anywhere else
if (system_tx) {
#ifdef MG_ENTERPRISE
system_tx->AddAction<UpdateAuthData>(user);
#endif
}
}
std::optional<User> Auth::AddUser(const std::string &username, const std::optional<std::string> &password) {
void Auth::UpdatePassword(auth::User &user, const std::optional<std::string> &password) {
DisableIfModuleUsed();
// Check if null
if (!password) {
if (!config_.password_permit_null) {
throw AuthException("Null passwords aren't permitted!");
}
} else {
// Check if compliant with our filter
if (config_.custom_password_regex) {
if (const auto license_check_result = license::global_license_checker.IsEnterpriseValid(utils::global_settings);
license_check_result.HasError()) {
throw AuthException(
"Custom password regex is a Memgraph Enterprise feature. Please set the config "
"(\"--auth-password-strength-regex\") to its default value (\"{}\") or remove the flag.\n{}",
glue::kDefaultPasswordRegex,
license::LicenseCheckErrorToString(license_check_result.GetError(), "password regex"));
}
}
if (!std::regex_match(*password, config_.password_regex)) {
throw AuthException(
"The user password doesn't conform to the required strength! Regex: "
"\"{}\"",
config_.password_regex_str);
}
}
// All checks passed; update
user.UpdatePassword(password);
}
std::optional<User> Auth::AddUser(const std::string &username, const std::optional<std::string> &password,
system::Transaction *system_tx) {
DisableIfModuleUsed();
if (!NameRegexMatch(username)) {
throw AuthException("Invalid user name.");
}
auto existing_user = GetUser(username);
if (existing_user) return std::nullopt;
auto existing_role = GetRole(username);
if (existing_role) return std::nullopt;
auto new_user = User(username);
new_user.UpdatePassword(password);
SaveUser(new_user);
UpdatePassword(new_user, password);
SaveUser(new_user, system_tx);
return new_user;
}
bool Auth::RemoveUser(const std::string &username_orig) {
bool Auth::RemoveUser(const std::string &username_orig, system::Transaction *system_tx) {
DisableIfModuleUsed();
auto username = utils::ToLowerCase(username_orig);
if (!storage_.Get(kUserPrefix + username)) return false;
std::vector<std::string> keys({kLinkPrefix + username, kUserPrefix + username});
if (!storage_.DeleteMultiple(keys)) {
throw AuthException("Couldn't remove user '{}'!", username);
}
// Durability updated -> new epoch
UpdateEpoch();
// Handling drop user delta
if (system_tx) {
#ifdef MG_ENTERPRISE
system_tx->AddAction<DropAuthData>(DropAuthData::AuthDataType::USER, username);
#endif
}
return true;
}
@ -223,9 +419,28 @@ std::vector<auth::User> Auth::AllUsers() const {
for (auto it = storage_.begin(kUserPrefix); it != storage_.end(kUserPrefix); ++it) {
auto username = it->first.substr(kUserPrefix.size());
if (username != utils::ToLowerCase(username)) continue;
auto user = GetUser(username);
if (user) {
ret.push_back(std::move(*user));
try {
User user = auth::User::Deserialize(ParseJson(it->second)); // Will throw on failure
LinkUser(user);
ret.emplace_back(std::move(user));
} catch (AuthException &) {
continue;
}
}
return ret;
}
std::vector<std::string> Auth::AllUsernames() const {
std::vector<std::string> ret;
for (auto it = storage_.begin(kUserPrefix); it != storage_.end(kUserPrefix); ++it) {
auto username = it->first.substr(kUserPrefix.size());
if (username != utils::ToLowerCase(username)) continue;
try {
// Check if serialized correctly
memgraph::auth::User::Deserialize(ParseJson(it->second)); // Will throw on failure
ret.emplace_back(std::move(username));
} catch (AuthException &) {
continue;
}
}
return ret;
@ -233,36 +448,44 @@ std::vector<auth::User> Auth::AllUsers() const {
bool Auth::HasUsers() const { return storage_.begin(kUserPrefix) != storage_.end(kUserPrefix); }
bool Auth::AccessControlled() const { return HasUsers() || module_.IsUsed(); }
std::optional<Role> Auth::GetRole(const std::string &rolename_orig) const {
auto rolename = utils::ToLowerCase(rolename_orig);
auto existing_role = storage_.Get(kRolePrefix + rolename);
if (!existing_role) return std::nullopt;
nlohmann::json data;
try {
data = nlohmann::json::parse(*existing_role);
} catch (const nlohmann::json::parse_error &e) {
throw AuthException("Couldn't load role data!");
}
return Role::Deserialize(data);
return Role::Deserialize(ParseJson(*existing_role));
}
void Auth::SaveRole(const Role &role) {
void Auth::SaveRole(const Role &role, system::Transaction *system_tx) {
if (!storage_.Put(kRolePrefix + role.rolename(), role.Serialize().dump())) {
throw AuthException("Couldn't save role '{}'!", role.rolename());
}
// Durability updated -> new epoch
UpdateEpoch();
// All changes to the role end up calling this function, so no need to add a delta anywhere else
if (system_tx) {
#ifdef MG_ENTERPRISE
system_tx->AddAction<UpdateAuthData>(role);
#endif
}
}
std::optional<Role> Auth::AddRole(const std::string &rolename) {
std::optional<Role> Auth::AddRole(const std::string &rolename, system::Transaction *system_tx) {
if (!NameRegexMatch(rolename)) {
throw AuthException("Invalid role name.");
}
if (auto existing_role = GetRole(rolename)) return std::nullopt;
if (auto existing_user = GetUser(rolename)) return std::nullopt;
auto new_role = Role(rolename);
SaveRole(new_role);
SaveRole(new_role, system_tx);
return new_role;
}
bool Auth::RemoveRole(const std::string &rolename_orig) {
bool Auth::RemoveRole(const std::string &rolename_orig, system::Transaction *system_tx) {
auto rolename = utils::ToLowerCase(rolename_orig);
if (!storage_.Get(kRolePrefix + rolename)) return false;
std::vector<std::string> keys;
@ -275,6 +498,16 @@ bool Auth::RemoveRole(const std::string &rolename_orig) {
if (!storage_.DeleteMultiple(keys)) {
throw AuthException("Couldn't remove role '{}'!", rolename);
}
// Durability updated -> new epoch
UpdateEpoch();
// Handling drop role delta
if (system_tx) {
#ifdef MG_ENTERPRISE
system_tx->AddAction<DropAuthData>(DropAuthData::AuthDataType::ROLE, rolename);
#endif
}
return true;
}
@ -283,16 +516,30 @@ std::vector<auth::Role> Auth::AllRoles() const {
for (auto it = storage_.begin(kRolePrefix); it != storage_.end(kRolePrefix); ++it) {
auto rolename = it->first.substr(kRolePrefix.size());
if (rolename != utils::ToLowerCase(rolename)) continue;
if (auto role = GetRole(rolename)) {
ret.push_back(*role);
} else {
throw AuthException("Couldn't load role '{}'!", rolename);
Role role = memgraph::auth::Role::Deserialize(ParseJson(it->second)); // Will throw on failure
ret.emplace_back(std::move(role));
}
return ret;
}
std::vector<std::string> Auth::AllRolenames() const {
std::vector<std::string> ret;
for (auto it = storage_.begin(kRolePrefix); it != storage_.end(kRolePrefix); ++it) {
auto rolename = it->first.substr(kRolePrefix.size());
if (rolename != utils::ToLowerCase(rolename)) continue;
try {
// Check that the data is serialized correctly
memgraph::auth::Role::Deserialize(ParseJson(it->second));
ret.emplace_back(std::move(rolename));
} catch (AuthException &) {
continue;
}
}
return ret;
}
std::vector<auth::User> Auth::AllUsersForRole(const std::string &rolename_orig) const {
DisableIfModuleUsed();
const auto rolename = utils::ToLowerCase(rolename_orig);
std::vector<auth::User> ret;
for (auto it = storage_.begin(kLinkPrefix); it != storage_.end(kLinkPrefix); ++it) {
@ -311,52 +558,192 @@ std::vector<auth::User> Auth::AllUsersForRole(const std::string &rolename_orig)
}
#ifdef MG_ENTERPRISE
bool Auth::GrantDatabaseToUser(const std::string &db, const std::string &name) {
if (auto user = GetUser(name)) {
if (db == kAllDatabases) {
user->db_access().GrantAll();
} else {
user->db_access().Add(db);
Auth::Result Auth::GrantDatabase(const std::string &db, const std::string &name, system::Transaction *system_tx) {
using enum Auth::Result;
if (module_.IsUsed()) {
if (auto role = GetRole(name)) {
GrantDatabase(db, *role, system_tx);
return SUCCESS;
}
SaveUser(*user);
return true;
return NO_ROLE;
}
return false;
if (auto user = GetUser(name)) {
GrantDatabase(db, *user, system_tx);
return SUCCESS;
}
if (auto role = GetRole(name)) {
GrantDatabase(db, *role, system_tx);
return SUCCESS;
}
return NO_USER_ROLE;
}
bool Auth::RevokeDatabaseFromUser(const std::string &db, const std::string &name) {
if (auto user = GetUser(name)) {
if (db == kAllDatabases) {
user->db_access().DenyAll();
} else {
user->db_access().Remove(db);
}
SaveUser(*user);
return true;
void Auth::GrantDatabase(const std::string &db, User &user, system::Transaction *system_tx) {
if (db == kAllDatabases) {
user.db_access().GrantAll();
} else {
user.db_access().Grant(db);
}
return false;
SaveUser(user, system_tx);
}
void Auth::DeleteDatabase(const std::string &db) {
void Auth::GrantDatabase(const std::string &db, Role &role, system::Transaction *system_tx) {
if (db == kAllDatabases) {
role.db_access().GrantAll();
} else {
role.db_access().Grant(db);
}
SaveRole(role, system_tx);
}
Auth::Result Auth::DenyDatabase(const std::string &db, const std::string &name, system::Transaction *system_tx) {
using enum Auth::Result;
if (module_.IsUsed()) {
if (auto role = GetRole(name)) {
DenyDatabase(db, *role, system_tx);
return SUCCESS;
}
return NO_ROLE;
}
if (auto user = GetUser(name)) {
DenyDatabase(db, *user, system_tx);
return SUCCESS;
}
if (auto role = GetRole(name)) {
DenyDatabase(db, *role, system_tx);
return SUCCESS;
}
return NO_USER_ROLE;
}
void Auth::DenyDatabase(const std::string &db, User &user, system::Transaction *system_tx) {
if (db == kAllDatabases) {
user.db_access().DenyAll();
} else {
user.db_access().Deny(db);
}
SaveUser(user, system_tx);
}
void Auth::DenyDatabase(const std::string &db, Role &role, system::Transaction *system_tx) {
if (db == kAllDatabases) {
role.db_access().DenyAll();
} else {
role.db_access().Deny(db);
}
SaveRole(role, system_tx);
}
Auth::Result Auth::RevokeDatabase(const std::string &db, const std::string &name, system::Transaction *system_tx) {
using enum Auth::Result;
if (module_.IsUsed()) {
if (auto role = GetRole(name)) {
RevokeDatabase(db, *role, system_tx);
return SUCCESS;
}
return NO_ROLE;
}
if (auto user = GetUser(name)) {
RevokeDatabase(db, *user, system_tx);
return SUCCESS;
}
if (auto role = GetRole(name)) {
RevokeDatabase(db, *role, system_tx);
return SUCCESS;
}
return NO_USER_ROLE;
}
void Auth::RevokeDatabase(const std::string &db, User &user, system::Transaction *system_tx) {
if (db == kAllDatabases) {
user.db_access().RevokeAll();
} else {
user.db_access().Revoke(db);
}
SaveUser(user, system_tx);
}
void Auth::RevokeDatabase(const std::string &db, Role &role, system::Transaction *system_tx) {
if (db == kAllDatabases) {
role.db_access().RevokeAll();
} else {
role.db_access().Revoke(db);
}
SaveRole(role, system_tx);
}
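
The Grant/Deny/Revoke tri-state reduces to Contains(): a database is accessible only when granted (explicitly or via allow_all) and not denied. A small sketch (includes elided), assuming GrantAll behaves as documented in the header:

void DatabasesTriState() {
  memgraph::auth::Databases dbs;  // fresh object: access to the default database only
  dbs.Grant("db1");
  dbs.Deny("db1");              // deny overrides the earlier grant
  assert(!dbs.Contains("db1"));
  dbs.Revoke("db1");            // dropped from both sets; still not granted
  assert(!dbs.Contains("db1"));
  dbs.GrantAll();               // allow_all_ = true, explicit grants cleared
  assert(dbs.Contains("db1"));  // granted via allow_all and not denied
}
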
void Auth::DeleteDatabase(const std::string &db, system::Transaction *system_tx) {
for (auto it = storage_.begin(kUserPrefix); it != storage_.end(kUserPrefix); ++it) {
auto username = it->first.substr(kUserPrefix.size());
if (auto user = GetUser(username)) {
user->db_access().Delete(db);
SaveUser(*user);
try {
User user = auth::User::Deserialize(ParseJson(it->second));
LinkUser(user);
user.db_access().Revoke(db);
SaveUser(user, system_tx);
} catch (AuthException &) {
continue;
}
}
for (auto it = storage_.begin(kRolePrefix); it != storage_.end(kRolePrefix); ++it) {
auto rolename = it->first.substr(kRolePrefix.size());
try {
auto role = memgraph::auth::Role::Deserialize(ParseJson(it->second));
role.db_access().Revoke(db);
SaveRole(role, system_tx);
} catch (AuthException &) {
continue;
}
}
}
bool Auth::SetMainDatabase(std::string_view db, const std::string &name) {
if (auto user = GetUser(name)) {
if (!user->db_access().SetDefault(db)) {
throw AuthException("Couldn't set default database '{}' for user '{}'!", db, name);
Auth::Result Auth::SetMainDatabase(std::string_view db, const std::string &name, system::Transaction *system_tx) {
using enum Auth::Result;
if (module_.IsUsed()) {
if (auto role = GetRole(name)) {
SetMainDatabase(db, *role, system_tx);
return SUCCESS;
}
SaveUser(*user);
return true;
return NO_ROLE;
}
return false;
if (auto user = GetUser(name)) {
SetMainDatabase(db, *user, system_tx);
return SUCCESS;
}
if (auto role = GetRole(name)) {
SetMainDatabase(db, *role, system_tx);
return SUCCESS;
}
return NO_USER_ROLE;
}
void Auth::SetMainDatabase(std::string_view db, User &user, system::Transaction *system_tx) {
if (!user.db_access().SetMain(db)) {
throw AuthException("Couldn't set default database '{}' for '{}'!", db, user.username());
}
SaveUser(user, system_tx);
}
void Auth::SetMainDatabase(std::string_view db, Role &role, system::Transaction *system_tx) {
if (!role.db_access().SetMain(db)) {
throw AuthException("Couldn't set default database '{}' for '{}'!", db, role.rolename());
}
SaveRole(role, system_tx);
}
#endif
bool Auth::NameRegexMatch(const std::string &user_or_role) const {
if (config_.custom_name_regex) {
if (const auto license_check_result =
memgraph::license::global_license_checker.IsEnterpriseValid(memgraph::utils::global_settings);
license_check_result.HasError()) {
throw memgraph::auth::AuthException(
"Custom user/role regex is a Memgraph Enterprise feature. Please set the config "
"(\"--auth-user-or-role-name-regex\") to its default value (\"{}\") or remove the flag.\n{}",
glue::kDefaultUserRoleRegex,
memgraph::license::LicenseCheckErrorToString(license_check_result.GetError(), "user/role regex"));
}
}
return std::regex_match(user_or_role, config_.name_regex);
}
} // namespace memgraph::auth

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Licensed as a Memgraph Enterprise file under the Memgraph Enterprise
// License (the "License"); by using this file, you agree to be bound by the terms of the License, and you may not use
@ -10,18 +10,37 @@
#include <mutex>
#include <optional>
#include <regex>
#include <vector>
#include "auth/exceptions.hpp"
#include "auth/models.hpp"
#include "auth/module.hpp"
#include "glue/auth_global.hpp"
#include "kvstore/kvstore.hpp"
#include "system/action.hpp"
#include "utils/settings.hpp"
#include "utils/synchronized.hpp"
namespace memgraph::auth {
class Auth;
using SynchedAuth = memgraph::utils::Synchronized<memgraph::auth::Auth, memgraph::utils::WritePrioritizedRWLock>;
static const constexpr char *const kAllDatabases = "*";
struct RoleWUsername : Role {
template <typename... Args>
RoleWUsername(std::string_view username, Args &&...args) : Role{std::forward<Args>(args)...}, username_{username} {}
std::string username() { return username_; }
const std::string &username() const { return username_; }
private:
std::string username_;
};
using UserOrRole = std::variant<User, RoleWUsername>;
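
Call sites dispatch over the variant returned by Authenticate; a minimal sketch (includes elided, credentials hypothetical, the Overloaded helper is local to the example):

template <class... Ts>
struct Overloaded : Ts... {
  using Ts::operator()...;
};
template <class... Ts>
Overloaded(Ts...) -> Overloaded<Ts...>;

void OnLogin(memgraph::auth::Auth &auth) {
  auto res = auth.Authenticate("alice", "s3cret");
  if (!res) return;  // authentication failed
  std::visit(Overloaded{[](const memgraph::auth::User &user) { /* locally stored user */ },
                        [](const memgraph::auth::RoleWUsername &role) {
                          // module-backed login: a role plus the username it was resolved for
                          const auto &name = role.username();  // e.g. for audit logging
                        }},
             *res);
}
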
/**
* This class serves as the main Authentication/Authorization storage.
* It provides functions for managing Users, Roles, Permissions and FineGrainedAccessPermissions.
@ -31,7 +50,66 @@ static const constexpr char *const kAllDatabases = "*";
*/
class Auth final {
public:
explicit Auth(const std::string &storage_directory);
struct Config {
Config() {}
Config(std::string name_regex, std::string password_regex, bool password_permit_null)
: name_regex_str{std::move(name_regex)},
password_regex_str{std::move(password_regex)},
password_permit_null{password_permit_null},
custom_name_regex{name_regex_str != glue::kDefaultUserRoleRegex},
name_regex{name_regex_str},
custom_password_regex{password_regex_str != glue::kDefaultPasswordRegex},
password_regex{password_regex_str} {}
std::string name_regex_str{glue::kDefaultUserRoleRegex};
std::string password_regex_str{glue::kDefaultPasswordRegex};
bool password_permit_null{true};
private:
friend class Auth;
bool custom_name_regex{false};
std::regex name_regex{name_regex_str};
bool custom_password_regex{false};
std::regex password_regex{password_regex_str};
};
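
A short sketch of the derived flags: a custom_* flag flips only when the supplied pattern differs from its glue default, and those flags are what later gate the Enterprise license checks (the name pattern below is hypothetical):

using Config = memgraph::auth::Auth::Config;
Config defaults{};  // both regexes equal the glue defaults; no license checks ever trigger
Config custom{/*name_regex=*/"[a-z]{3,}",
              std::string{glue::kDefaultPasswordRegex},
              /*password_permit_null=*/false};
// custom_name_regex is now true (private, consulted by NameRegexMatch), while the
// password regex still equals its default, so UpdatePassword performs no license check.
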
struct Epoch {
Epoch() : epoch_{0} {}
Epoch(unsigned e) : epoch_{e} {}
Epoch operator++() { return ++epoch_; }
bool operator==(const Epoch &rhs) const = default;
private:
unsigned epoch_;
};
static const Epoch kStartEpoch;
enum class Result {
SUCCESS,
NO_USER_ROLE,
NO_ROLE,
};
explicit Auth(std::string storage_directory, Config config);
/**
* @brief Set the Config object
*
* @param config
*/
void SetConfig(Config config) {
// NOTE: The Auth class itself is not thread-safe, higher-level code needs to synchronize it when using it.
config_ = std::move(config);
}
/**
* @brief Returns a copy of the currently active Config.
*
* @return Config
*/
Config GetConfig() const { return config_; }
/**
* Authenticates a user using his username and password.
@ -42,7 +120,7 @@ class Auth final {
* @return a user when the username and password match, nullopt otherwise
* @throw AuthException if unable to authenticate for whatever reason.
*/
std::optional<User> Authenticate(const std::string &username, const std::string &password);
std::optional<UserOrRole> Authenticate(const std::string &username, const std::string &password);
/**
* Gets a user from the storage.
@ -54,6 +132,8 @@ class Auth final {
*/
std::optional<User> GetUser(const std::string &username) const;
void LinkUser(User &user) const;
/**
* Saves a user object to the storage.
*
@ -61,7 +141,7 @@ class Auth final {
*
* @throw AuthException if unable to save the user.
*/
void SaveUser(const User &user);
void SaveUser(const User &user, system::Transaction *system_tx = nullptr);
/**
* Creates a user if the user doesn't exist.
@ -72,7 +152,8 @@ class Auth final {
* @return a user when the user is created, nullopt if the user exists
* @throw AuthException if unable to save the user.
*/
std::optional<User> AddUser(const std::string &username, const std::optional<std::string> &password = std::nullopt);
std::optional<User> AddUser(const std::string &username, const std::optional<std::string> &password = std::nullopt,
system::Transaction *system_tx = nullptr);
/**
* Removes a user from the storage.
@ -83,7 +164,15 @@ class Auth final {
* doesn't exist
* @throw AuthException if unable to remove the user.
*/
bool RemoveUser(const std::string &username);
bool RemoveUser(const std::string &username, system::Transaction *system_tx = nullptr);
/**
* @brief Updates the user's password, enforcing the configured null-password and strength-regex policy.
*
* @param user
* @param password
*/
void UpdatePassword(auth::User &user, const std::optional<std::string> &password);
/**
* Gets all users from the storage.
@ -93,6 +182,13 @@ class Auth final {
*/
std::vector<User> AllUsers() const;
/**
* @brief Gets the usernames of all valid users from the storage.
*
* @return std::vector<std::string>
*/
std::vector<std::string> AllUsernames() const;
/**
* Returns whether there are users in the storage.
*
@ -100,6 +196,13 @@ class Auth final {
*/
bool HasUsers() const;
/**
* Returns whether the access is controlled by authentication/authorization.
*
* @return `true` if auth needs to run
*/
bool AccessControlled() const;
/**
* Gets a role from the storage.
*
@ -110,6 +213,37 @@ class Auth final {
*/
std::optional<Role> GetRole(const std::string &rolename) const;
std::optional<UserOrRole> GetUserOrRole(const std::optional<std::string> &username,
const std::optional<std::string> &rolename) const {
auto expect = [](bool condition, std::string &&msg) {
if (!condition) throw AuthException(std::move(msg));
};
// Special case if we are using a module; we must find the specified role
if (module_.IsUsed()) {
expect(username && rolename, "When using a module, a role needs to be connected to a username.");
const auto role = GetRole(*rolename);
expect(role != std::nullopt, "No role named " + *rolename);
return UserOrRole(auth::RoleWUsername{*username, *role});
}
// First check if we need to find a role
if (username && rolename) {
const auto role = GetRole(*rolename);
expect(role != std::nullopt, "No role named " + *rolename);
return UserOrRole(auth::RoleWUsername{*username, *role});
}
// We are only looking for a user
if (username) {
const auto user = GetUser(*username);
expect(user != std::nullopt, "No user named " + *username);
return *user;
}
// No user or role
return std::nullopt;
}
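
Illustrative call patterns (names hypothetical; a referenced but missing user/role raises AuthException via expect):

void ResolveExamples(const memgraph::auth::Auth &auth) {
  auto a = auth.GetUserOrRole("alice", "architect");        // RoleWUsername{"alice", architect}
  auto b = auth.GetUserOrRole("alice", std::nullopt);       // the stored User "alice"
  auto c = auth.GetUserOrRole(std::nullopt, std::nullopt);  // std::nullopt: nothing to resolve
}
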
/**
* Saves a role object to the storage.
*
@ -117,7 +251,7 @@ class Auth final {
*
* @throw AuthException if unable to save the role.
*/
void SaveRole(const Role &role);
void SaveRole(const Role &role, system::Transaction *system_tx = nullptr);
/**
* Creates a role if the role doesn't exist.
@ -127,7 +261,7 @@ class Auth final {
* @return a role when the role is created, nullopt if the role exists
* @throw AuthException if unable to save the role.
*/
std::optional<Role> AddRole(const std::string &rolename);
std::optional<Role> AddRole(const std::string &rolename, system::Transaction *system_tx = nullptr);
/**
* Removes a role from the storage.
@ -138,7 +272,7 @@ class Auth final {
* doesn't exist
* @throw AuthException if unable to remove the role.
*/
bool RemoveRole(const std::string &rolename);
bool RemoveRole(const std::string &rolename, system::Transaction *system_tx = nullptr);
/**
* Gets all roles from the storage.
@ -148,6 +282,13 @@ class Auth final {
*/
std::vector<Role> AllRoles() const;
/**
* @brief Gets the rolenames of all valid roles from the storage.
*
* @return std::vector<std::string>
*/
std::vector<std::string> AllRolenames() const;
/**
* Gets all users for a role from the storage.
*
@ -159,16 +300,6 @@ class Auth final {
std::vector<User> AllUsersForRole(const std::string &rolename) const;
#ifdef MG_ENTERPRISE
/**
* @brief Revoke access to individual database for a user.
*
* @param db name of the database to revoke
* @param name user's username
* @return true on success
* @throw AuthException if unable to find or update the user
*/
bool RevokeDatabaseFromUser(const std::string &db, const std::string &name);
/**
* @brief Grant access to an individual database for a user or role.
*
@ -177,7 +308,33 @@ class Auth final {
* @return Result::SUCCESS on success, NO_USER_ROLE/NO_ROLE if no matching user/role exists
* @throw AuthException if unable to find or update the user
*/
bool GrantDatabaseToUser(const std::string &db, const std::string &name);
Result GrantDatabase(const std::string &db, const std::string &name, system::Transaction *system_tx = nullptr);
void GrantDatabase(const std::string &db, User &user, system::Transaction *system_tx = nullptr);
void GrantDatabase(const std::string &db, Role &role, system::Transaction *system_tx = nullptr);
/**
* @brief Deny access to an individual database for a user or role.
*
* @param db name of the database to deny
* @param name user's or role's name
* @return Result::SUCCESS on success, NO_USER_ROLE/NO_ROLE if no matching user/role exists
* @throw AuthException if unable to find or update the user
*/
Result DenyDatabase(const std::string &db, const std::string &name, system::Transaction *system_tx = nullptr);
void DenyDatabase(const std::string &db, User &user, system::Transaction *system_tx = nullptr);
void DenyDatabase(const std::string &db, Role &role, system::Transaction *system_tx = nullptr);
/**
* @brief Revoke access to an individual database for a user or role.
*
* @param db name of the database to revoke
* @param name user's or role's name
* @return Result::SUCCESS on success, NO_USER_ROLE/NO_ROLE if no matching user/role exists
* @throw AuthException if unable to find or update the user
*/
Result RevokeDatabase(const std::string &db, const std::string &name, system::Transaction *system_tx = nullptr);
void RevokeDatabase(const std::string &db, User &user, system::Transaction *system_tx = nullptr);
void RevokeDatabase(const std::string &db, Role &role, system::Transaction *system_tx = nullptr);
/**
* @brief Delete a database from all users and roles.
@ -185,7 +342,7 @@ class Auth final {
* @param db name of the database to delete
* @throw AuthException if unable to read data
*/
void DeleteDatabase(const std::string &db);
void DeleteDatabase(const std::string &db, system::Transaction *system_tx = nullptr);
/**
* @brief Set the main database for an individual user or role.
@ -195,14 +352,39 @@ class Auth final {
* @return Result::SUCCESS on success, NO_USER_ROLE/NO_ROLE if no matching user/role exists
* @throw AuthException if unable to find or update the user
*/
bool SetMainDatabase(std::string_view db, const std::string &name);
Result SetMainDatabase(std::string_view db, const std::string &name, system::Transaction *system_tx = nullptr);
void SetMainDatabase(std::string_view db, User &user, system::Transaction *system_tx = nullptr);
void SetMainDatabase(std::string_view db, Role &role, system::Transaction *system_tx = nullptr);
#endif
bool UpToDate(Epoch &e) const {
bool res = e == epoch_;
e = epoch_;
return res;
}
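
Epoch acts as a cheap change ticket: every durability update bumps it, so callers can cache state derived from Auth and poll for staleness. A sketch of that pattern:

memgraph::auth::Auth::Epoch last_seen{};  // == Auth::kStartEpoch
void RefreshIfStale(const memgraph::auth::Auth &auth) {
  if (!auth.UpToDate(last_seen)) {
    // A user/role was saved or removed since the last check; last_seen has
    // already been advanced, so rebuild any cached state exactly once.
  }
}
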
private:
/**
* @brief Checks a user/role name against the configured name regex.
*
* @param user_or_role the name to validate
* @return true if the name matches the configured regex
*/
bool NameRegexMatch(const std::string &user_or_role) const;
void UpdateEpoch() { ++epoch_; }
void DisableIfModuleUsed() const {
if (module_.IsUsed()) throw AuthException("Operation not permitted when using an authentication module.");
}
// Even though the `kvstore::KVStore` class is guaranteed to be thread-safe,
// Auth is not thread-safe because modifying users and roles might require
// more than one operation on the storage.
kvstore::KVStore storage_;
auth::Module module_;
Config config_;
Epoch epoch_{kStartEpoch};
};
} // namespace memgraph::auth

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Licensed as a Memgraph Enterprise file under the Memgraph Enterprise
// License (the "License"); by using this file, you agree to be bound by the terms of the License, and you may not use
@ -22,10 +22,14 @@
namespace {
using namespace std::literals;
inline constexpr std::array password_encryption_mappings{
std::pair{"bcrypt"sv, memgraph::auth::PasswordEncryptionAlgorithm::BCRYPT},
std::pair{"sha256"sv, memgraph::auth::PasswordEncryptionAlgorithm::SHA256},
std::pair{"sha256-multiple"sv, memgraph::auth::PasswordEncryptionAlgorithm::SHA256_MULTIPLE}};
constexpr auto kHashAlgo = "hash_algo";
constexpr auto kPasswordHash = "password_hash";
inline constexpr std::array password_hash_mappings{
std::pair{"bcrypt"sv, memgraph::auth::PasswordHashAlgorithm::BCRYPT},
std::pair{"sha256"sv, memgraph::auth::PasswordHashAlgorithm::SHA256},
std::pair{"sha256-multiple"sv, memgraph::auth::PasswordHashAlgorithm::SHA256_MULTIPLE}};
inline constexpr uint64_t ONE_SHA_ITERATION = 1;
inline constexpr uint64_t MULTIPLE_SHA_ITERATIONS = 1024;
@ -35,7 +39,7 @@ inline constexpr uint64_t MULTIPLE_SHA_ITERATIONS = 1024;
DEFINE_VALIDATED_string(password_encryption_algorithm, "bcrypt",
"The password encryption algorithm used for authentication.", {
if (const auto result =
memgraph::utils::IsValidEnumValueString(value, password_encryption_mappings);
memgraph::utils::IsValidEnumValueString(value, password_hash_mappings);
result.HasError()) {
const auto error = result.GetError();
switch (error) {
@ -45,7 +49,7 @@ DEFINE_VALIDATED_string(password_encryption_algorithm, "bcrypt",
}
case memgraph::utils::ValidationError::InvalidValue: {
std::cout << "Invalid value for password encryption algorithm. Allowed values: "
<< memgraph::utils::GetAllowedEnumValuesString(password_encryption_mappings)
<< memgraph::utils::GetAllowedEnumValuesString(password_hash_mappings)
<< std::endl;
break;
}
@ -58,7 +62,7 @@ DEFINE_VALIDATED_string(password_encryption_algorithm, "bcrypt",
namespace memgraph::auth {
namespace BCrypt {
std::string EncryptPassword(const std::string &password) {
std::string HashPassword(const std::string &password) {
char salt[BCRYPT_HASHSIZE];
char hash[BCRYPT_HASHSIZE];
@ -86,16 +90,30 @@ bool VerifyPassword(const std::string &password, const std::string &hash) {
} // namespace BCrypt
namespace SHA {
namespace {
constexpr auto SHA_LENGTH = 64U;
constexpr auto SALT_SIZE = 16U;
constexpr auto SALT_SIZE_DURABLE = SALT_SIZE * 2;
#if OPENSSL_VERSION_MAJOR >= 3
std::string EncryptPasswordOpenSSL3(const std::string &password, const uint64_t number_of_iterations) {
std::string HashPasswordOpenSSL3(std::string_view password, const uint64_t number_of_iterations,
std::string_view salt) {
unsigned char hash[SHA256_DIGEST_LENGTH];
EVP_MD_CTX *ctx = EVP_MD_CTX_new();
EVP_MD *md = EVP_MD_fetch(nullptr, "SHA2-256", nullptr);
EVP_DigestInit_ex(ctx, md, nullptr);
if (!salt.empty()) {
DMG_ASSERT(salt.size() == SALT_SIZE);
EVP_DigestUpdate(ctx, salt.data(), salt.size());
}
for (auto i = 0; i < number_of_iterations; i++) {
EVP_DigestUpdate(ctx, password.c_str(), password.size());
EVP_DigestUpdate(ctx, password.data(), password.size());
}
EVP_DigestFinal_ex(ctx, hash, nullptr);
@ -103,6 +121,11 @@ std::string EncryptPasswordOpenSSL3(const std::string &password, const uint64_t
EVP_MD_CTX_free(ctx);
std::stringstream result_stream;
for (unsigned char salt_char : salt) {
result_stream << std::hex << std::setw(2) << std::setfill('0') << (((unsigned int)salt_char) & 0xFFU);
}
for (auto hash_char : hash) {
result_stream << std::hex << std::setw(2) << std::setfill('0') << (int)hash_char;
}
@ -110,17 +133,27 @@ std::string EncryptPasswordOpenSSL3(const std::string &password, const uint64_t
return result_stream.str();
}
#else
std::string EncryptPasswordOpenSSL1_1(const std::string &password, const uint64_t number_of_iterations) {
std::string HashPasswordOpenSSL1_1(std::string_view password, const uint64_t number_of_iterations,
std::string_view salt) {
unsigned char hash[SHA256_DIGEST_LENGTH];
SHA256_CTX sha256;
SHA256_Init(&sha256);
if (!salt.empty()) {
DMG_ASSERT(salt.size() == SALT_SIZE);
SHA256_Update(&sha256, salt.data(), salt.size());
}
for (auto i = 0; i < number_of_iterations; i++) {
SHA256_Update(&sha256, password.c_str(), password.size());
SHA256_Update(&sha256, password.data(), password.size());
}
SHA256_Final(hash, &sha256);
std::stringstream ss;
for (unsigned char salt_char : salt) {
ss << std::hex << std::setw(2) << std::setfill('0') << (((unsigned int)salt_char) & 0xFFU);
}
for (auto hash_char : hash) {
ss << std::hex << std::setw(2) << std::setfill('0') << (int)hash_char;
}
@ -129,55 +162,144 @@ std::string EncryptPasswordOpenSSL1_1(const std::string &password, const uint64_
}
#endif
std::string EncryptPassword(const std::string &password, const uint64_t number_of_iterations) {
std::string HashPassword(std::string_view password, const uint64_t number_of_iterations, std::string_view salt) {
#if OPENSSL_VERSION_MAJOR >= 3
return EncryptPasswordOpenSSL3(password, number_of_iterations);
return HashPasswordOpenSSL3(password, number_of_iterations, salt);
#else
return EncryptPasswordOpenSSL1_1(password, number_of_iterations);
return HashPasswordOpenSSL1_1(password, number_of_iterations, salt);
#endif
}
bool VerifyPassword(const std::string &password, const std::string &hash, const uint64_t number_of_iterations) {
auto password_hash = EncryptPassword(password, number_of_iterations);
auto ExtractSalt(std::string_view salt_durable) -> std::array<char, SALT_SIZE> {
static_assert(SALT_SIZE_DURABLE % 2 == 0);
static_assert(SALT_SIZE_DURABLE / 2 == SALT_SIZE);
MG_ASSERT(salt_durable.size() == SALT_SIZE_DURABLE);
auto const *b = salt_durable.cbegin();
auto const *const e = salt_durable.cend();
auto salt = std::array<char, SALT_SIZE>{};
auto *inserter = salt.begin();
auto const toval = [](char a) -> uint8_t {
if ('0' <= a && a <= '9') {
return a - '0';
}
if ('a' <= a && a <= 'f') {
return 10 + (a - 'a');
}
MG_ASSERT(false, "Currupt hash, can't extract salt");
__builtin_unreachable();
};
for (; b != e; b += 2, ++inserter) {
*inserter = static_cast<char>(static_cast<uint8_t>(toval(b[0]) << 4U) | toval(b[1]));
}
return salt;
}
bool IsSalted(std::string_view hash) { return hash.size() == SHA_LENGTH + SALT_SIZE_DURABLE; }
bool VerifyPassword(std::string_view password, std::string_view hash, const uint64_t number_of_iterations) {
auto password_hash = std::invoke([&] {
if (hash.size() == SHA_LENGTH) [[unlikely]] {
// Just SHA256
return HashPassword(password, number_of_iterations, {});
} else {
// SHA256 + SALT
MG_ASSERT(IsSalted(hash));
auto const salt_durable = std::string_view{hash.data(), SALT_SIZE_DURABLE};
std::array<char, SALT_SIZE> salt = ExtractSalt(salt_durable);
return HashPassword(password, number_of_iterations, {salt.data(), salt.size()});
}
});
return password_hash == hash;
}
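
For reference, the durable layout this size-based dispatch relies on (N is the iteration count):

  unsalted: 64 hex chars                = hex(SHA-256(password repeated N times))
  salted:   32 hex chars + 64 hex chars = hex(16-byte salt) || hex(SHA-256(salt || password repeated N times))
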
} // namespace
} // namespace SHA
bool VerifyPassword(const std::string &password, const std::string &hash) {
const auto password_encryption_algorithm = utils::StringToEnum<PasswordEncryptionAlgorithm>(
FLAGS_password_encryption_algorithm, password_encryption_mappings);
HashedPassword HashPassword(const std::string &password, std::optional<PasswordHashAlgorithm> override_algo) {
auto const hash_algo = override_algo.value_or(CurrentHashAlgorithm());
auto password_hash = std::invoke([&] {
switch (hash_algo) {
case PasswordHashAlgorithm::BCRYPT: {
return BCrypt::HashPassword(password);
}
case PasswordHashAlgorithm::SHA256:
case PasswordHashAlgorithm::SHA256_MULTIPLE: {
auto gen = std::mt19937(std::random_device{}());
auto salt = std::array<char, SHA::SALT_SIZE>{};
// uniform_int_distribution is not defined for char-sized types, so draw ints and narrow
auto dis = std::uniform_int_distribution<int>(0, 255);
std::generate(salt.begin(), salt.end(), [&]() { return static_cast<char>(dis(gen)); });
auto iterations = (hash_algo == PasswordHashAlgorithm::SHA256) ? ONE_SHA_ITERATION : MULTIPLE_SHA_ITERATIONS;
return SHA::HashPassword(password, iterations, {salt.data(), salt.size()});
}
}
});
return HashedPassword{hash_algo, std::move(password_hash)};
};
if (!password_encryption_algorithm.has_value()) {
throw AuthException("Invalid password encryption flag '{}'!", FLAGS_password_encryption_algorithm);
namespace {
auto InternalParseHashAlgorithm(std::string_view algo) -> PasswordHashAlgorithm {
auto maybe_parsed = utils::StringToEnum<PasswordHashAlgorithm>(algo, password_hash_mappings);
if (!maybe_parsed) {
throw AuthException("Invalid password encryption '{}'!", algo);
}
switch (password_encryption_algorithm.value()) {
case PasswordEncryptionAlgorithm::BCRYPT:
return BCrypt::VerifyPassword(password, hash);
case PasswordEncryptionAlgorithm::SHA256:
return SHA::VerifyPassword(password, hash, ONE_SHA_ITERATION);
case PasswordEncryptionAlgorithm::SHA256_MULTIPLE:
return SHA::VerifyPassword(password, hash, MULTIPLE_SHA_ITERATIONS);
}
throw AuthException("Invalid password encryption flag '{}'!", FLAGS_password_encryption_algorithm);
return *maybe_parsed;
}
std::string EncryptPassword(const std::string &password) {
const auto password_encryption_algorithm = utils::StringToEnum<PasswordEncryptionAlgorithm>(
FLAGS_password_encryption_algorithm, password_encryption_mappings);
PasswordHashAlgorithm &InternalCurrentHashAlgorithm() {
static auto current = PasswordHashAlgorithm::BCRYPT;
static std::once_flag flag;
std::call_once(flag, [] { current = InternalParseHashAlgorithm(FLAGS_password_encryption_algorithm); });
return current;
}
} // namespace
if (!password_encryption_algorithm.has_value()) {
throw AuthException("Invalid password encryption flag '{}'!", FLAGS_password_encryption_algorithm);
auto CurrentHashAlgorithm() -> PasswordHashAlgorithm { return InternalCurrentHashAlgorithm(); }
void SetHashAlgorithm(std::string_view algo) {
auto &current = InternalCurrentHashAlgorithm();
current = InternalParseHashAlgorithm(algo);
}
auto AsString(PasswordHashAlgorithm hash_algo) -> std::string_view {
return *utils::EnumToString<PasswordHashAlgorithm>(hash_algo, password_hash_mappings);
}
bool HashedPassword::VerifyPassword(const std::string &password) {
switch (hash_algo) {
case PasswordHashAlgorithm::BCRYPT:
return BCrypt::VerifyPassword(password, password_hash);
case PasswordHashAlgorithm::SHA256:
return SHA::VerifyPassword(password, password_hash, ONE_SHA_ITERATION);
case PasswordHashAlgorithm::SHA256_MULTIPLE:
return SHA::VerifyPassword(password, password_hash, MULTIPLE_SHA_ITERATIONS);
}
}
switch (password_encryption_algorithm.value()) {
case PasswordEncryptionAlgorithm::BCRYPT:
return BCrypt::EncryptPassword(password);
case PasswordEncryptionAlgorithm::SHA256:
return SHA::EncryptPassword(password, ONE_SHA_ITERATION);
case PasswordEncryptionAlgorithm::SHA256_MULTIPLE:
return SHA::EncryptPassword(password, MULTIPLE_SHA_ITERATIONS);
void to_json(nlohmann::json &j, const HashedPassword &p) {
j = nlohmann::json{{kHashAlgo, p.hash_algo}, {kPasswordHash, p.password_hash}};
}
void from_json(const nlohmann::json &j, HashedPassword &p) {
// NOLINTNEXTLINE(cppcoreguidelines-init-variables)
PasswordHashAlgorithm hash_algo;
j.at(kHashAlgo).get_to(hash_algo);
auto password_hash = j.value(kPasswordHash, std::string());
p = HashedPassword{hash_algo, std::move(password_hash)};
}
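
A HashedPassword therefore serializes as, e.g. (digest shortened; the numeric value follows the stable PasswordHashAlgorithm mapping, here SHA256_MULTIPLE):

  {"hash_algo": 2, "password_hash": "1c9ad3...b60e72"}
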
bool HashedPassword::IsSalted() const {
switch (hash_algo) {
case PasswordHashAlgorithm::BCRYPT:
return true;
case PasswordHashAlgorithm::SHA256:
case PasswordHashAlgorithm::SHA256_MULTIPLE:
return SHA::IsSalted(password_hash);
}
}

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Licensed as a Memgraph Enterprise file under the Memgraph Enterprise
// License (the "License"); by using this file, you agree to be bound by the terms of the License, and you may not use
@ -8,14 +8,47 @@
#pragma once
#include <cstdint>
#include <optional>
#include <string>
#include <json/json.hpp>
namespace memgraph::auth {
enum class PasswordEncryptionAlgorithm : uint8_t { BCRYPT, SHA256, SHA256_MULTIPLE };
/// Values need to be stable; auth durability depends on them
enum class PasswordHashAlgorithm : uint8_t { BCRYPT = 0, SHA256 = 1, SHA256_MULTIPLE = 2 };
/// @throw AuthException if unable to encrypt the password.
std::string EncryptPassword(const std::string &password);
void SetHashAlgorithm(std::string_view algo);
/// @throw AuthException if unable to verify the password.
bool VerifyPassword(const std::string &password, const std::string &hash);
auto CurrentHashAlgorithm() -> PasswordHashAlgorithm;
auto AsString(PasswordHashAlgorithm hash_algo) -> std::string_view;
struct HashedPassword {
HashedPassword() = default;
HashedPassword(PasswordHashAlgorithm hash_algo, std::string password_hash)
: hash_algo{hash_algo}, password_hash{std::move(password_hash)} {}
HashedPassword(HashedPassword const &) = default;
HashedPassword(HashedPassword &&) = default;
HashedPassword &operator=(HashedPassword const &) = default;
HashedPassword &operator=(HashedPassword &&) = default;
friend bool operator==(HashedPassword const &, HashedPassword const &) = default;
bool VerifyPassword(const std::string &password);
bool IsSalted() const;
auto HashAlgo() const -> PasswordHashAlgorithm { return hash_algo; }
friend void to_json(nlohmann::json &j, const HashedPassword &p);
friend void from_json(const nlohmann::json &j, HashedPassword &p);
private:
PasswordHashAlgorithm hash_algo{PasswordHashAlgorithm::BCRYPT};
std::string password_hash{};
};
/// @throw AuthException if unable to hash the password.
HashedPassword HashPassword(const std::string &password, std::optional<PasswordHashAlgorithm> override_algo = {});
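
A minimal round trip through this public surface, as a sketch:

#include "auth/crypto.hpp"

void HashRoundTrip() {
  using namespace memgraph::auth;
  SetHashAlgorithm("sha256-multiple");  // override the flag-derived default
  HashedPassword hashed = HashPassword("s3cret");
  bool const ok = hashed.VerifyPassword("s3cret");  // true
  bool const salted = hashed.IsSalted();            // true: fresh SHA hashes carry a salt
}
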
} // namespace memgraph::auth

View File

@ -9,7 +9,6 @@
#include "auth/models.hpp"
#include <cstdint>
#include <regex>
#include <utility>
#include <gflags/gflags.h>
@ -21,22 +20,26 @@
#include "query/constants.hpp"
#include "spdlog/spdlog.h"
#include "utils/cast.hpp"
#include "utils/logging.hpp"
#include "utils/settings.hpp"
#include "utils/string.hpp"
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_bool(auth_password_permit_null, true, "Set to false to disable null passwords.");
inline constexpr std::string_view default_password_regex = ".+";
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
DEFINE_string(auth_password_strength_regex, default_password_regex.data(),
"The regular expression that should be used to match the entire "
"entered password to ensure its strength.");
namespace memgraph::auth {
namespace {
constexpr auto kRoleName = "rolename";
constexpr auto kPermissions = "permissions";
constexpr auto kGrants = "grants";
constexpr auto kDenies = "denies";
constexpr auto kUsername = "username";
constexpr auto kPasswordHash = "password_hash";
#ifdef MG_ENTERPRISE
constexpr auto kGlobalPermission = "global_permission";
constexpr auto kFineGrainedAccessHandler = "fine_grained_access_handler";
constexpr auto kAllowAll = "allow_all";
constexpr auto kDefault = "default";
constexpr auto kDatabases = "databases";
#endif
// Constant list of all available permissions.
const std::vector<Permission> kPermissionsAll = {Permission::MATCH,
Permission::CREATE,
@ -245,8 +248,9 @@ std::vector<Permission> Permissions::GetDenies() const {
nlohmann::json Permissions::Serialize() const {
nlohmann::json data = nlohmann::json::object();
data["grants"] = grants_;
data["denies"] = denies_;
data[kGrants] = grants_;
data[kDenies] = denies_;
return data;
}
@ -254,10 +258,10 @@ Permissions Permissions::Deserialize(const nlohmann::json &data) {
if (!data.is_object()) {
throw AuthException("Couldn't load permissions data!");
}
if (!data["grants"].is_number_unsigned() || !data["denies"].is_number_unsigned()) {
if (!data[kGrants].is_number_unsigned() || !data[kDenies].is_number_unsigned()) {
throw AuthException("Couldn't load permissions data!");
}
return Permissions{data["grants"], data["denies"]};
return Permissions{data[kGrants], data[kDenies]};
}
uint64_t Permissions::grants() const { return grants_; }
@ -319,8 +323,8 @@ nlohmann::json FineGrainedAccessPermissions::Serialize() const {
return {};
}
nlohmann::json data = nlohmann::json::object();
data["permissions"] = permissions_;
data["global_permission"] = global_permission_.has_value() ? global_permission_.value() : -1;
data[kPermissions] = permissions_;
data[kGlobalPermission] = global_permission_.has_value() ? global_permission_.value() : -1;
return data;
}
@ -333,13 +337,13 @@ FineGrainedAccessPermissions FineGrainedAccessPermissions::Deserialize(const nlo
}
std::optional<uint64_t> global_permission;
if (data["global_permission"].empty() || data["global_permission"] == -1) {
if (data[kGlobalPermission].empty() || data[kGlobalPermission] == -1) {
global_permission = std::nullopt;
} else {
global_permission = data["global_permission"];
global_permission = data[kGlobalPermission];
}
return FineGrainedAccessPermissions(data["permissions"], global_permission);
return FineGrainedAccessPermissions(data[kPermissions], global_permission);
}
const std::unordered_map<std::string, uint64_t> &FineGrainedAccessPermissions::GetPermissions() const {
@ -421,10 +425,11 @@ Role::Role(const std::string &rolename, const Permissions &permissions)
: rolename_(utils::ToLowerCase(rolename)), permissions_(permissions) {}
#ifdef MG_ENTERPRISE
Role::Role(const std::string &rolename, const Permissions &permissions,
FineGrainedAccessHandler fine_grained_access_handler)
FineGrainedAccessHandler fine_grained_access_handler, Databases db_access)
: rolename_(utils::ToLowerCase(rolename)),
permissions_(permissions),
fine_grained_access_handler_(std::move(fine_grained_access_handler)) {}
fine_grained_access_handler_(std::move(fine_grained_access_handler)),
db_access_(std::move(db_access)) {}
#endif
const std::string &Role::rolename() const { return rolename_; }
@ -445,13 +450,15 @@ const FineGrainedAccessPermissions &Role::GetFineGrainedAccessEdgeTypePermission
nlohmann::json Role::Serialize() const {
nlohmann::json data = nlohmann::json::object();
data["rolename"] = rolename_;
data["permissions"] = permissions_.Serialize();
data[kRoleName] = rolename_;
data[kPermissions] = permissions_.Serialize();
#ifdef MG_ENTERPRISE
if (memgraph::license::global_license_checker.IsEnterpriseValidFast()) {
data["fine_grained_access_handler"] = fine_grained_access_handler_.Serialize();
data[kFineGrainedAccessHandler] = fine_grained_access_handler_.Serialize();
data[kDatabases] = db_access_.Serialize();
} else {
data["fine_grained_access_handler"] = {};
data[kFineGrainedAccessHandler] = {};
data[kDatabases] = {};
}
#endif
return data;
@ -461,21 +468,30 @@ Role Role::Deserialize(const nlohmann::json &data) {
if (!data.is_object()) {
throw AuthException("Couldn't load role data!");
}
if (!data["rolename"].is_string() || !data["permissions"].is_object()) {
if (!data[kRoleName].is_string() || !data[kPermissions].is_object()) {
throw AuthException("Couldn't load role data!");
}
auto permissions = Permissions::Deserialize(data["permissions"]);
auto permissions = Permissions::Deserialize(data[kPermissions]);
#ifdef MG_ENTERPRISE
if (memgraph::license::global_license_checker.IsEnterpriseValidFast()) {
Databases db_access;
if (data[kDatabases].is_structured()) {
db_access = Databases::Deserialize(data[kDatabases]);
} else {
// Back-compatibility
spdlog::warn("Role without specified database access. Given access to the default database.");
db_access.Grant(dbms::kDefaultDB);
db_access.SetMain(dbms::kDefaultDB);
}
FineGrainedAccessHandler fine_grained_access_handler;
// We can have an empty fine_grained if the user was created without a valid license
if (data["fine_grained_access_handler"].is_object()) {
fine_grained_access_handler = FineGrainedAccessHandler::Deserialize(data["fine_grained_access_handler"]);
if (data[kFineGrainedAccessHandler].is_object()) {
fine_grained_access_handler = FineGrainedAccessHandler::Deserialize(data[kFineGrainedAccessHandler]);
}
return {data["rolename"], permissions, std::move(fine_grained_access_handler)};
return {data[kRoleName], permissions, std::move(fine_grained_access_handler), std::move(db_access)};
}
#endif
return {data["rolename"], permissions};
return {data[kRoleName], permissions};
}
bool operator==(const Role &first, const Role &second) {
@ -489,7 +505,7 @@ bool operator==(const Role &first, const Role &second) {
}
#ifdef MG_ENTERPRISE
void Databases::Add(std::string_view db) {
void Databases::Grant(std::string_view db) {
if (allow_all_) {
grants_dbs_.clear();
allow_all_ = false;
@ -498,19 +514,19 @@ void Databases::Add(std::string_view db) {
denies_dbs_.erase(std::string{db}); // TODO: C++23 use transparent key compare
}
void Databases::Remove(const std::string &db) {
void Databases::Deny(const std::string &db) {
denies_dbs_.emplace(db);
grants_dbs_.erase(db);
}
void Databases::Delete(const std::string &db) {
void Databases::Revoke(const std::string &db) {
denies_dbs_.erase(db);
if (!allow_all_) {
grants_dbs_.erase(db);
}
// Reset if default deleted
if (default_db_ == db) {
default_db_ = "";
if (main_db_ == db) {
main_db_ = "";
}
}
@ -526,9 +542,16 @@ void Databases::DenyAll() {
denies_dbs_.clear();
}
bool Databases::SetDefault(std::string_view db) {
void Databases::RevokeAll() {
allow_all_ = false;
grants_dbs_.clear();
denies_dbs_.clear();
main_db_ = "";
}
bool Databases::SetMain(std::string_view db) {
if (!Contains(db)) return false;
default_db_ = db;
main_db_ = db;
return true;
}
@ -536,19 +559,19 @@ bool Databases::SetDefault(std::string_view db) {
return !denies_dbs_.contains(db) && (allow_all_ || grants_dbs_.contains(db));
}
const std::string &Databases::GetDefault() const {
if (!Contains(default_db_)) {
throw AuthException("No access to the set default database \"{}\".", default_db_);
const std::string &Databases::GetMain() const {
if (!Contains(main_db_)) {
throw AuthException("No access to the set default database \"{}\".", main_db_);
}
return default_db_;
return main_db_;
}
nlohmann::json Databases::Serialize() const {
nlohmann::json data = nlohmann::json::object();
data["grants"] = grants_dbs_;
data["denies"] = denies_dbs_;
data["allow_all"] = allow_all_;
data["default"] = default_db_;
data[kGrants] = grants_dbs_;
data[kDenies] = denies_dbs_;
data[kAllowAll] = allow_all_;
data[kDefault] = main_db_;
return data;
}
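
For example, a default-constructed Databases object serializes roughly as (assuming dbms::kDefaultDB is "memgraph"; nlohmann orders object keys alphabetically):

  {"allow_all": false, "default": "memgraph", "denies": [], "grants": ["memgraph"]}
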
@ -556,22 +579,22 @@ Databases Databases::Deserialize(const nlohmann::json &data) {
if (!data.is_object()) {
throw AuthException("Couldn't load database data!");
}
if (!data["grants"].is_structured() || !data["denies"].is_structured() || !data["allow_all"].is_boolean() ||
!data["default"].is_string()) {
if (!data[kGrants].is_structured() || !data[kDenies].is_structured() || !data[kAllowAll].is_boolean() ||
!data[kDefault].is_string()) {
throw AuthException("Couldn't load database data!");
}
return {data["allow_all"], data["grants"], data["denies"], data["default"]};
return {data[kAllowAll], data[kGrants], data[kDenies], data[kDefault]};
}
#endif
User::User() = default;
User::User(const std::string &username) : username_(utils::ToLowerCase(username)) {}
User::User(const std::string &username, std::string password_hash, const Permissions &permissions)
User::User(const std::string &username, std::optional<HashedPassword> password_hash, const Permissions &permissions)
: username_(utils::ToLowerCase(username)), password_hash_(std::move(password_hash)), permissions_(permissions) {}
#ifdef MG_ENTERPRISE
User::User(const std::string &username, std::string password_hash, const Permissions &permissions,
User::User(const std::string &username, std::optional<HashedPassword> password_hash, const Permissions &permissions,
FineGrainedAccessHandler fine_grained_access_handler, Databases db_access)
: username_(utils::ToLowerCase(username)),
password_hash_(std::move(password_hash)),
@ -581,38 +604,16 @@ User::User(const std::string &username, std::string password_hash, const Permiss
#endif
bool User::CheckPassword(const std::string &password) {
if (password_hash_.empty()) return true;
return VerifyPassword(password, password_hash_);
return password_hash_ ? password_hash_->VerifyPassword(password) : true;
}
void User::UpdatePassword(const std::optional<std::string> &password) {
void User::UpdatePassword(const std::optional<std::string> &password,
std::optional<PasswordHashAlgorithm> algo_override) {
if (!password) {
if (!FLAGS_auth_password_permit_null) {
throw AuthException("Null passwords aren't permitted!");
}
password_hash_ = "";
password_hash_.reset();
return;
}
if (FLAGS_auth_password_strength_regex != default_password_regex) {
if (const auto license_check_result = license::global_license_checker.IsEnterpriseValid(utils::global_settings);
license_check_result.HasError()) {
throw AuthException(
"Custom password regex is a Memgraph Enterprise feature. Please set the config "
"(\"--auth-password-strength-regex\") to its default value (\"{}\") or remove the flag.\n{}",
default_password_regex,
license::LicenseCheckErrorToString(license_check_result.GetError(), "password regex"));
}
}
std::regex re(FLAGS_auth_password_strength_regex);
if (!std::regex_match(*password, re)) {
throw AuthException(
"The user password doesn't conform to the required strength! Regex: "
"\"{}\"",
FLAGS_auth_password_strength_regex);
}
password_hash_ = EncryptPassword(*password);
password_hash_ = HashPassword(*password, algo_override);
}
void User::SetRole(const Role &role) { role_.emplace(role); }
@ -629,27 +630,49 @@ Permissions User::GetPermissions() const {
#ifdef MG_ENTERPRISE
FineGrainedAccessPermissions User::GetFineGrainedAccessLabelPermissions() const {
return Merge(GetUserFineGrainedAccessLabelPermissions(), GetRoleFineGrainedAccessLabelPermissions());
}
FineGrainedAccessPermissions User::GetFineGrainedAccessEdgeTypePermissions() const {
return Merge(GetUserFineGrainedAccessEdgeTypePermissions(), GetRoleFineGrainedAccessEdgeTypePermissions());
}
FineGrainedAccessPermissions User::GetUserFineGrainedAccessEdgeTypePermissions() const {
if (!memgraph::license::global_license_checker.IsEnterpriseValidFast()) {
return FineGrainedAccessPermissions{};
}
if (role_) {
return Merge(role()->fine_grained_access_handler().label_permissions(),
fine_grained_access_handler_.label_permissions());
return fine_grained_access_handler_.edge_type_permissions();
}
FineGrainedAccessPermissions User::GetUserFineGrainedAccessLabelPermissions() const {
if (!memgraph::license::global_license_checker.IsEnterpriseValidFast()) {
return FineGrainedAccessPermissions{};
}
return fine_grained_access_handler_.label_permissions();
}
FineGrainedAccessPermissions User::GetFineGrainedAccessEdgeTypePermissions() const {
FineGrainedAccessPermissions User::GetRoleFineGrainedAccessEdgeTypePermissions() const {
if (!memgraph::license::global_license_checker.IsEnterpriseValidFast()) {
return FineGrainedAccessPermissions{};
}
if (role_) {
return Merge(role()->fine_grained_access_handler().edge_type_permissions(),
fine_grained_access_handler_.edge_type_permissions());
return role()->fine_grained_access_handler().edge_type_permissions();
}
return fine_grained_access_handler_.edge_type_permissions();
return FineGrainedAccessPermissions{};
}
FineGrainedAccessPermissions User::GetRoleFineGrainedAccessLabelPermissions() const {
if (!memgraph::license::global_license_checker.IsEnterpriseValidFast()) {
return FineGrainedAccessPermissions{};
}
if (role_) {
return role()->fine_grained_access_handler().label_permissions();
}
return FineGrainedAccessPermissions{};
}
#endif
@ -671,16 +694,20 @@ const Role *User::role() const {
nlohmann::json User::Serialize() const {
nlohmann::json data = nlohmann::json::object();
data["username"] = username_;
data["password_hash"] = password_hash_;
data["permissions"] = permissions_.Serialize();
data[kUsername] = username_;
if (password_hash_.has_value()) {
data[kPasswordHash] = *password_hash_;
} else {
data[kPasswordHash] = nullptr;
}
data[kPermissions] = permissions_.Serialize();
#ifdef MG_ENTERPRISE
if (memgraph::license::global_license_checker.IsEnterpriseValidFast()) {
data["fine_grained_access_handler"] = fine_grained_access_handler_.Serialize();
data["databases"] = database_access_.Serialize();
data[kFineGrainedAccessHandler] = fine_grained_access_handler_.Serialize();
data[kDatabases] = database_access_.Serialize();
} else {
data["fine_grained_access_handler"] = {};
data["databases"] = {};
data[kFineGrainedAccessHandler] = {};
data[kDatabases] = {};
}
#endif
// The role shouldn't be serialized here, it is stored as a foreign key.
@ -691,30 +718,39 @@ User User::Deserialize(const nlohmann::json &data) {
if (!data.is_object()) {
throw AuthException("Couldn't load user data!");
}
if (!data["username"].is_string() || !data["password_hash"].is_string() || !data["permissions"].is_object()) {
auto password_hash_json = data[kPasswordHash];
if (!data[kUsername].is_string() || !(password_hash_json.is_object() || password_hash_json.is_null()) ||
!data[kPermissions].is_object()) {
throw AuthException("Couldn't load user data!");
}
auto permissions = Permissions::Deserialize(data["permissions"]);
std::optional<HashedPassword> password_hash{};
if (password_hash_json.is_object()) {
password_hash = password_hash_json.get<HashedPassword>();
}
auto permissions = Permissions::Deserialize(data[kPermissions]);
#ifdef MG_ENTERPRISE
if (memgraph::license::global_license_checker.IsEnterpriseValidFast()) {
Databases db_access;
if (data["databases"].is_structured()) {
db_access = Databases::Deserialize(data["databases"]);
if (data[kDatabases].is_structured()) {
db_access = Databases::Deserialize(data[kDatabases]);
} else {
// Back-compatibility
spdlog::warn("User without specified database access. Given access to the default database.");
db_access.Add(dbms::kDefaultDB);
db_access.SetDefault(dbms::kDefaultDB);
db_access.Grant(dbms::kDefaultDB);
db_access.SetMain(dbms::kDefaultDB);
}
FineGrainedAccessHandler fine_grained_access_handler;
// We can have an empty fine_grained if the user was created without a valid license
if (data["fine_grained_access_handler"].is_object()) {
fine_grained_access_handler = FineGrainedAccessHandler::Deserialize(data["fine_grained_access_handler"]);
if (data[kFineGrainedAccessHandler].is_object()) {
fine_grained_access_handler = FineGrainedAccessHandler::Deserialize(data[kFineGrainedAccessHandler]);
}
return {data["username"], data["password_hash"], permissions, std::move(fine_grained_access_handler), db_access};
return {data[kUsername], std::move(password_hash), permissions, std::move(fine_grained_access_handler),
std::move(db_access)};
}
#endif
return {data["username"], data["password_hash"], permissions};
return {data[kUsername], std::move(password_hash), permissions};
}
bool operator==(const User &first, const User &second) {

View File

@ -15,6 +15,7 @@
#include <json/json.hpp>
#include <utility>
#include "crypto.hpp"
#include "dbms/constants.hpp"
#include "utils/logging.hpp"
@ -204,50 +205,10 @@ class FineGrainedAccessHandler final {
bool operator==(const FineGrainedAccessHandler &first, const FineGrainedAccessHandler &second);
#endif
class Role final {
public:
explicit Role(const std::string &rolename);
Role(const std::string &rolename, const Permissions &permissions);
#ifdef MG_ENTERPRISE
Role(const std::string &rolename, const Permissions &permissions,
FineGrainedAccessHandler fine_grained_access_handler);
#endif
Role(const Role &) = default;
Role &operator=(const Role &) = default;
Role(Role &&) noexcept = default;
Role &operator=(Role &&) noexcept = default;
~Role() = default;
const std::string &rolename() const;
const Permissions &permissions() const;
Permissions &permissions();
#ifdef MG_ENTERPRISE
const FineGrainedAccessHandler &fine_grained_access_handler() const;
FineGrainedAccessHandler &fine_grained_access_handler();
const FineGrainedAccessPermissions &GetFineGrainedAccessLabelPermissions() const;
const FineGrainedAccessPermissions &GetFineGrainedAccessEdgeTypePermissions() const;
#endif
nlohmann::json Serialize() const;
/// @throw AuthException if unable to deserialize.
static Role Deserialize(const nlohmann::json &data);
friend bool operator==(const Role &first, const Role &second);
private:
std::string rolename_;
Permissions permissions_;
#ifdef MG_ENTERPRISE
FineGrainedAccessHandler fine_grained_access_handler_;
#endif
};
bool operator==(const Role &first, const Role &second);
#ifdef MG_ENTERPRISE
class Databases final {
public:
Databases() : grants_dbs_{std::string{dbms::kDefaultDB}}, allow_all_(false), default_db_(dbms::kDefaultDB) {}
Databases() : grants_dbs_{std::string{dbms::kDefaultDB}}, allow_all_(false), main_db_(dbms::kDefaultDB) {}
Databases(const Databases &) = default;
Databases &operator=(const Databases &) = default;
@ -260,7 +221,7 @@ class Databases final {
*
* @param db name of the database to grant access to
*/
void Add(std::string_view db);
void Grant(std::string_view db);
/**
* @brief Remove a database from the list of granted access and add it to the denied list.
@ -269,7 +230,7 @@ class Databases final {
*
* @param db name of the database to deny access to
*/
void Remove(const std::string &db);
void Deny(const std::string &db);
/**
* @brief Called when a database is dropped. Removes it from the granted set (if allow_all is false) and the denied set.
@ -277,7 +238,7 @@ class Databases final {
*
* @param db name of the database to revoke
*/
void Delete(const std::string &db);
void Revoke(const std::string &db);
/**
* @brief Sets allow_all_ to true and clears the granted and denied sets.
@ -289,10 +250,15 @@ class Databases final {
*/
void DenyAll();
/**
* @brief Sets allow_all_ to false and clears the granted/denied sets and the main database.
*/
void RevokeAll();
/**
* @brief Set the default database.
*/
bool SetDefault(std::string_view db);
bool SetMain(std::string_view db);
/**
* @brief Checks if access is granted to the database.
@ -301,11 +267,13 @@ class Databases final {
* @return true if granted (explicitly or via allow_all) and not denied
*/
bool Contains(std::string_view db) const;
bool Denies(std::string_view db_name) const { return denies_dbs_.contains(db_name); }
bool Grants(std::string_view db_name) const { return allow_all_ || grants_dbs_.contains(db_name); }
bool GetAllowAll() const { return allow_all_; }
const std::set<std::string, std::less<>> &GetGrants() const { return grants_dbs_; }
const std::set<std::string, std::less<>> &GetDenies() const { return denies_dbs_; }
const std::string &GetDefault() const;
const std::string &GetMain() const;
nlohmann::json Serialize() const;
/// @throw AuthException if unable to deserialize.
@ -317,24 +285,78 @@ class Databases final {
: grants_dbs_(std::move(grant)),
denies_dbs_(std::move(deny)),
allow_all_(allow_all),
default_db_(std::move(default_db)) {}
main_db_(std::move(default_db)) {}
std::set<std::string, std::less<>> grants_dbs_; //!< set of databases with granted access
std::set<std::string, std::less<>> denies_dbs_; //!< set of databases with denied access
bool allow_all_; //!< flag to allow access to everything (denied overrides this)
std::string default_db_; //!< user's default database
std::string main_db_; //!< user's main (default) database
};
#endif
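Aside: a minimal standalone sketch of the grant/deny semantics after the Add/Remove/Delete to Grant/Deny/Revoke rename. DatabasesModel is a hypothetical stand-in, not the class above; the key invariant is that a deny always overrides a grant:

// Hypothetical stand-in modelling Grant/Deny/Revoke/Contains semantics.
#include <cassert>
#include <set>
#include <string>

struct DatabasesModel {
  std::set<std::string> grants{"memgraph"};  // the default DB starts granted
  std::set<std::string> denies;
  bool allow_all = false;

  void Grant(const std::string &db) { denies.erase(db); grants.insert(db); }
  void Deny(const std::string &db) { grants.erase(db); denies.insert(db); }
  void Revoke(const std::string &db) { grants.erase(db); denies.erase(db); }
  bool Contains(const std::string &db) const {
    return !denies.contains(db) && (allow_all || grants.contains(db));
  }
};

int main() {
  DatabasesModel dbs;
  assert(dbs.Contains("memgraph"));
  dbs.Deny("memgraph");
  assert(!dbs.Contains("memgraph"));  // deny overrides the earlier grant
  dbs.Revoke("memgraph");
  assert(!dbs.Contains("memgraph"));  // revoked: neither granted nor denied
}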
class Role {
public:
Role() = default;
explicit Role(const std::string &rolename);
Role(const std::string &rolename, const Permissions &permissions);
#ifdef MG_ENTERPRISE
Role(const std::string &rolename, const Permissions &permissions,
FineGrainedAccessHandler fine_grained_access_handler, Databases db_access = {});
#endif
Role(const Role &) = default;
Role &operator=(const Role &) = default;
Role(Role &&) noexcept = default;
Role &operator=(Role &&) noexcept = default;
~Role() = default;
const std::string &rolename() const;
const Permissions &permissions() const;
Permissions &permissions();
Permissions GetPermissions() const { return permissions_; }
#ifdef MG_ENTERPRISE
const FineGrainedAccessHandler &fine_grained_access_handler() const;
FineGrainedAccessHandler &fine_grained_access_handler();
const FineGrainedAccessPermissions &GetFineGrainedAccessLabelPermissions() const;
const FineGrainedAccessPermissions &GetFineGrainedAccessEdgeTypePermissions() const;
#endif
#ifdef MG_ENTERPRISE
Databases &db_access() { return db_access_; }
const Databases &db_access() const { return db_access_; }
bool DeniesDB(std::string_view db_name) const { return db_access_.Denies(db_name); }
bool GrantsDB(std::string_view db_name) const { return db_access_.Grants(db_name); }
bool HasAccess(std::string_view db_name) const { return !DeniesDB(db_name) && GrantsDB(db_name); }
#endif
nlohmann::json Serialize() const;
/// @throw AuthException if unable to deserialize.
static Role Deserialize(const nlohmann::json &data);
friend bool operator==(const Role &first, const Role &second);
private:
std::string rolename_;
Permissions permissions_;
#ifdef MG_ENTERPRISE
FineGrainedAccessHandler fine_grained_access_handler_;
Databases db_access_;
#endif
};
bool operator==(const Role &first, const Role &second);
// TODO (mferencevic): Implement password expiry.
class User final {
public:
User();
explicit User(const std::string &username);
User(const std::string &username, std::string password_hash, const Permissions &permissions);
User(const std::string &username, std::optional<HashedPassword> password_hash, const Permissions &permissions);
#ifdef MG_ENTERPRISE
User(const std::string &username, std::string password_hash, const Permissions &permissions,
User(const std::string &username, std::optional<HashedPassword> password_hash, const Permissions &permissions,
FineGrainedAccessHandler fine_grained_access_handler, Databases db_access = {});
#endif
User(const User &) = default;
@ -346,8 +368,18 @@ class User final {
/// @throw AuthException if unable to verify the password.
bool CheckPassword(const std::string &password);
bool UpgradeHash(const std::string &password) {
if (!password_hash_) return false;
if (password_hash_->IsSalted()) return false;
auto const algo = password_hash_->HashAlgo();
UpdatePassword(password, algo);
return true;
}
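Aside: a standalone sketch of the upgrade-on-login idea behind UpgradeHash; the types here are hypothetical, and the real code re-hashes the just-verified password with the same algorithm so the stored hash gains a salt:

#include <optional>
#include <string>

struct HashModel {
  bool salted;
  std::string algo;
};

bool UpgradeHashModel(std::optional<HashModel> &stored, const std::string &password) {
  if (!stored || stored->salted) return false;  // nothing stored, or already salted
  (void)password;                               // the real code re-hashes this with a fresh salt
  stored = HashModel{true, stored->algo};       // same algorithm, now salted
  return true;
}

int main() {
  std::optional<HashModel> h{HashModel{false, "sha256"}};
  return UpgradeHashModel(h, "secret") && h->salted ? 0 : 1;
}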
/// @throw AuthException if unable to set the password.
void UpdatePassword(const std::optional<std::string> &password = std::nullopt);
void UpdatePassword(const std::optional<std::string> &password = {},
std::optional<PasswordHashAlgorithm> algo_override = std::nullopt);
void SetRole(const Role &role);
@ -358,6 +390,10 @@ class User final {
#ifdef MG_ENTERPRISE
FineGrainedAccessPermissions GetFineGrainedAccessLabelPermissions() const;
FineGrainedAccessPermissions GetFineGrainedAccessEdgeTypePermissions() const;
FineGrainedAccessPermissions GetUserFineGrainedAccessLabelPermissions() const;
FineGrainedAccessPermissions GetUserFineGrainedAccessEdgeTypePermissions() const;
FineGrainedAccessPermissions GetRoleFineGrainedAccessLabelPermissions() const;
FineGrainedAccessPermissions GetRoleFineGrainedAccessEdgeTypePermissions() const;
const FineGrainedAccessHandler &fine_grained_access_handler() const;
FineGrainedAccessHandler &fine_grained_access_handler();
#endif
@ -371,6 +407,18 @@ class User final {
#ifdef MG_ENTERPRISE
Databases &db_access() { return database_access_; }
const Databases &db_access() const { return database_access_; }
bool DeniesDB(std::string_view db_name) const {
bool denies = database_access_.Denies(db_name);
if (role_) denies |= role_->DeniesDB(db_name);
return denies;
}
bool GrantsDB(std::string_view db_name) const {
bool grants = database_access_.Grants(db_name);
if (role_) grants |= role_->GrantsDB(db_name);
return grants;
}
bool HasAccess(std::string_view db_name) const { return !DeniesDB(db_name) && GrantsDB(db_name); }
#endif
nlohmann::json Serialize() const;
@ -382,11 +430,11 @@ class User final {
private:
std::string username_;
std::string password_hash_;
std::optional<HashedPassword> password_hash_;
Permissions permissions_;
#ifdef MG_ENTERPRISE
FineGrainedAccessHandler fine_grained_access_handler_;
Databases database_access_;
Databases database_access_{};
#endif
std::optional<Role> role_;
};
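Aside: the DeniesDB/GrantsDB/HasAccess triple above ORs the user's own database access with the role's, so a deny on either side wins. A standalone model of that resolution (hypothetical types, not the classes in this diff):

#include <cassert>
#include <set>
#include <string>

struct AccessModel {
  std::set<std::string> grants;
  std::set<std::string> denies;
  bool allow_all = false;
  bool Grants(const std::string &db) const { return allow_all || grants.contains(db); }
  bool Denies(const std::string &db) const { return denies.contains(db); }
};

bool HasAccess(const AccessModel &user, const AccessModel *role, const std::string &db) {
  bool denies = user.Denies(db) || (role != nullptr && role->Denies(db));
  bool grants = user.Grants(db) || (role != nullptr && role->Grants(db));
  return !denies && grants;  // a deny on either side overrides any grant
}

int main() {
  AccessModel user{{"memgraph"}, {}, false};
  AccessModel role_denying{{}, {"memgraph"}, false};
  assert(HasAccess(user, nullptr, "memgraph"));         // user grant, no role
  assert(!HasAccess(user, &role_denying, "memgraph"));  // role deny overrides user grant
}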

View File

@ -1,4 +1,4 @@
// Copyright 2022 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Licensed as a Memgraph Enterprise file under the Memgraph Enterprise
// License (the "License"); by using this file, you agree to be bound by the terms of the License, and you may not use
@ -403,7 +403,7 @@ nlohmann::json Module::Call(const nlohmann::json &params, int timeout_millisec)
return ret;
}
bool Module::IsUsed() { return !module_executable_path_.empty(); }
bool Module::IsUsed() const { return !module_executable_path_.empty(); }
void Module::Shutdown() {
if (pid_ == -1) return;

View File

@ -1,4 +1,4 @@
// Copyright 2022 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Licensed as a Memgraph Enterprise file under the Memgraph Enterprise
// License (the "License"); by using this file, you agree to be bound by the terms of the License, and you may not use
@ -49,7 +49,7 @@ class Module final {
/// specified executable path and can thus be used.
///
/// @return boolean indicating whether the module can be used
bool IsUsed();
bool IsUsed() const;
~Module();

View File

@ -0,0 +1,190 @@
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#include "auth/replication_handlers.hpp"
#include "auth/auth.hpp"
#include "auth/rpc.hpp"
#include "license/license.hpp"
namespace memgraph::auth {
void LogWrongMain(const std::optional<utils::UUID> &current_main_uuid, const utils::UUID &main_req_id,
std::string_view rpc_req) {
spdlog::error("Received {} with main_id: {} != current_main_uuid: {}", rpc_req, std::string(main_req_id),
current_main_uuid.has_value() ? std::string(current_main_uuid.value()) : "");
}
#ifdef MG_ENTERPRISE
void UpdateAuthDataHandler(memgraph::system::ReplicaHandlerAccessToState &system_state_access,
const std::optional<utils::UUID> &current_main_uuid, auth::SynchedAuth &auth,
slk::Reader *req_reader, slk::Builder *res_builder) {
replication::UpdateAuthDataReq req;
memgraph::slk::Load(&req, req_reader);
using memgraph::replication::UpdateAuthDataRes;
UpdateAuthDataRes res(false);
if (!current_main_uuid.has_value() || req.main_uuid != current_main_uuid) [[unlikely]] {
LogWrongMain(current_main_uuid, req.main_uuid, replication::UpdateAuthDataReq::kType.name);
memgraph::slk::Save(res, res_builder);
return;
}
// Note: No need to check the epoch; recovery is handled via a full, up-to-date
// snapshot of the auth data, so no history needs to be maintained across epoch changes.
// If MAIN has changed, we need to check that this new group_timestamp is consistent with
// what we have so far.
if (req.expected_group_timestamp != system_state_access.LastCommitedTS()) {
spdlog::debug("UpdateAuthDataHandler: bad expected timestamp {},{}", req.expected_group_timestamp,
system_state_access.LastCommitedTS());
memgraph::slk::Save(res, res_builder);
return;
}
try {
// Update
if (req.user) auth->SaveUser(*req.user);
if (req.role) auth->SaveRole(*req.role);
// Success
system_state_access.SetLastCommitedTS(req.new_group_timestamp);
res = UpdateAuthDataRes(true);
spdlog::debug("UpdateAuthDataHandler: SUCCESS updated LCTS to {}", req.new_group_timestamp);
} catch (const auth::AuthException & /* not used */) {
// Failure
}
memgraph::slk::Save(res, res_builder);
}
void DropAuthDataHandler(memgraph::system::ReplicaHandlerAccessToState &system_state_access,
const std::optional<utils::UUID> &current_main_uuid, auth::SynchedAuth &auth,
slk::Reader *req_reader, slk::Builder *res_builder) {
replication::DropAuthDataReq req;
memgraph::slk::Load(&req, req_reader);
using memgraph::replication::DropAuthDataRes;
DropAuthDataRes res(false);
if (!current_main_uuid.has_value() || req.main_uuid != current_main_uuid) [[unlikely]] {
LogWrongMain(current_main_uuid, req.main_uuid, replication::DropAuthDataRes::kType.name);
memgraph::slk::Save(res, res_builder);
return;
}
// Note: No need to check the epoch; recovery is handled via a full, up-to-date
// snapshot of the auth data, so no history needs to be maintained across epoch changes.
// If MAIN has changed, we need to check that this new group_timestamp is consistent with
// what we have so far.
if (req.expected_group_timestamp != system_state_access.LastCommitedTS()) {
spdlog::debug("DropAuthDataHandler: bad expected timestamp {},{}", req.expected_group_timestamp,
system_state_access.LastCommitedTS());
memgraph::slk::Save(res, res_builder);
return;
}
try {
// Remove
switch (req.type) {
case replication::DropAuthDataReq::DataType::USER:
auth->RemoveUser(req.name);
break;
case replication::DropAuthDataReq::DataType::ROLE:
auth->RemoveRole(req.name);
break;
}
// Success
system_state_access.SetLastCommitedTS(req.new_group_timestamp);
res = DropAuthDataRes(true);
spdlog::debug("DropAuthDataHandler: SUCCESS updated LCTS to {}", req.new_group_timestamp);
} catch (const auth::AuthException & /* not used */) {
// Failure
}
memgraph::slk::Save(res, res_builder);
}
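Both handlers gate on the same group-timestamp check: an update applies only if the replica's last committed timestamp matches the sender's expectation, after which the replica advances to the new timestamp. A toy model of that gate (standalone hypothetical types):

#include <cassert>
#include <cstdint>

struct ReplicaState {
  uint64_t last_commited_ts = 0;  // spelling mirrors LastCommitedTS() above
};

bool ApplyAuthUpdate(ReplicaState &state, uint64_t expected_ts, uint64_t new_ts) {
  if (expected_ts != state.last_commited_ts) return false;  // diverged; MAIN must run recovery
  state.last_commited_ts = new_ts;                          // commit and advance
  return true;
}

int main() {
  ReplicaState replica;
  assert(ApplyAuthUpdate(replica, 0, 1));   // in sync: applied
  assert(!ApplyAuthUpdate(replica, 0, 2));  // stale expectation: rejected
}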
bool SystemRecoveryHandler(auth::SynchedAuth &auth, auth::Auth::Config auth_config,
const std::vector<auth::User> &users, const std::vector<auth::Role> &roles) {
return auth.WithLock([&](auto &locked_auth) {
// Update config
locked_auth.SetConfig(std::move(auth_config));
// Get all current users
auto old_users = locked_auth.AllUsernames();
// Save incoming users
for (const auto &user : users) {
// Add missing users and update existing ones
try {
locked_auth.SaveUser(user);
} catch (const auth::AuthException &) {
spdlog::debug("SystemRecoveryHandler: Failed to save user");
return false;
}
const auto it = std::find(old_users.begin(), old_users.end(), user.username());
if (it != old_users.end()) old_users.erase(it);
}
// Delete all the leftover users
for (const auto &user : old_users) {
if (!locked_auth.RemoveUser(user)) {
spdlog::debug("SystemRecoveryHandler: Failed to remove user \"{}\".", user);
return false;
}
}
// Roles are only supported with a valid Enterprise license
if (license::global_license_checker.IsEnterpriseValidFast()) {
// Get all current roles
auto old_roles = locked_auth.AllRolenames();
// Save incoming roles
for (const auto &role : roles) {
// Add missing roles and update existing ones
try {
locked_auth.SaveRole(role);
} catch (const auth::AuthException &) {
spdlog::debug("SystemRecoveryHandler: Failed to save user");
return false;
}
const auto it = std::find(old_roles.begin(), old_roles.end(), role.rolename());
if (it != old_roles.end()) old_roles.erase(it);
}
// Delete all the leftover roles
for (const auto &role : old_roles) {
if (!locked_auth.RemoveRole(role)) {
spdlog::debug("SystemRecoveryHandler: Failed to remove user \"{}\".", role);
return false;
}
}
}
// Success
return true;
});
}
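SystemRecoveryHandler reconciles state by saving every incoming user/role and then removing whatever old names were not re-sent. A standalone model of that leftover computation (hypothetical helper mirroring the loops above):

#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

std::vector<std::string> Leftovers(std::vector<std::string> old_names,
                                   const std::vector<std::string> &incoming) {
  for (const auto &name : incoming) {
    auto it = std::find(old_names.begin(), old_names.end(), name);
    if (it != old_names.end()) old_names.erase(it);
  }
  return old_names;  // whatever MAIN did not re-send gets removed from the replica
}

int main() {
  auto gone = Leftovers({"alice", "bob"}, {"alice"});
  assert((gone == std::vector<std::string>{"bob"}));  // bob was not re-sent
}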
void Register(replication::RoleReplicaData const &data, system::ReplicaHandlerAccessToState &system_state_access,
auth::SynchedAuth &auth) {
// NOTE: Register even without license as the user could add a license at run-time
data.server->rpc_server_.Register<replication::UpdateAuthDataRpc>(
[&data, system_state_access, &auth](auto *req_reader, auto *res_builder) mutable {
spdlog::debug("Received UpdateAuthDataRpc");
UpdateAuthDataHandler(system_state_access, data.uuid_, auth, req_reader, res_builder);
});
data.server->rpc_server_.Register<replication::DropAuthDataRpc>(
[&data, system_state_access, &auth](auto *req_reader, auto *res_builder) mutable {
spdlog::debug("Received DropAuthDataRpc");
DropAuthDataHandler(system_state_access, data.uuid_, auth, req_reader, res_builder);
});
}
#endif
} // namespace memgraph::auth

View File

@ -0,0 +1,37 @@
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#pragma once
#include "auth/auth.hpp"
#include "replication/state.hpp"
#include "slk/streams.hpp"
#include "system/state.hpp"
namespace memgraph::auth {
void LogWrongMain(const std::optional<utils::UUID> &current_main_uuid, const utils::UUID &main_req_id,
std::string_view rpc_req);
#ifdef MG_ENTERPRISE
void UpdateAuthDataHandler(system::ReplicaHandlerAccessToState &system_state_access,
const std::optional<utils::UUID> &current_main_uuid, auth::SynchedAuth &auth,
slk::Reader *req_reader, slk::Builder *res_builder);
void DropAuthDataHandler(system::ReplicaHandlerAccessToState &system_state_access,
const std::optional<utils::UUID> &current_main_uuid, auth::SynchedAuth &auth,
slk::Reader *req_reader, slk::Builder *res_builder);
bool SystemRecoveryHandler(auth::SynchedAuth &auth, auth::Auth::Config auth_config,
const std::vector<auth::User> &users, const std::vector<auth::Role> &roles);
void Register(replication::RoleReplicaData const &data, system::ReplicaHandlerAccessToState &system_state_access,
auth::SynchedAuth &auth);
#endif
} // namespace memgraph::auth

180
src/auth/rpc.cpp Normal file
View File

@ -0,0 +1,180 @@
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#include "auth/rpc.hpp"
#include <json/json.hpp>
#include "auth/auth.hpp"
#include "slk/serialization.hpp"
#include "slk/streams.hpp"
#include "utils/enum.hpp"
namespace memgraph::slk {
// Serialize code for auth::Role
void Save(const auth::Role &self, Builder *builder) { memgraph::slk::Save(self.Serialize().dump(), builder); }
namespace {
auth::Role LoadAuthRole(memgraph::slk::Reader *reader) {
std::string tmp;
memgraph::slk::Load(&tmp, reader);
const auto json = nlohmann::json::parse(tmp);
return memgraph::auth::Role::Deserialize(json);
}
} // namespace
// Deserialize code for auth::Role
void Load(auth::Role *self, memgraph::slk::Reader *reader) { *self = LoadAuthRole(reader); }
// Special case for optional<Role>
template <>
inline void Load<auth::Role>(std::optional<auth::Role> *obj, Reader *reader) {
bool exists = false;
Load(&exists, reader);
if (exists) {
obj->emplace(LoadAuthRole(reader));
} else {
*obj = std::nullopt;
}
}
// Serialize code for auth::User
void Save(const auth::User &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self.Serialize().dump(), builder);
std::optional<auth::Role> role{};
if (const auto *role_ptr = self.role(); role_ptr) {
role.emplace(*role_ptr);
}
memgraph::slk::Save(role, builder);
}
// Deserialize code for auth::User
void Load(auth::User *self, memgraph::slk::Reader *reader) {
std::string tmp;
memgraph::slk::Load(&tmp, reader);
const auto json = nlohmann::json::parse(tmp);
*self = memgraph::auth::User::Deserialize(json);
std::optional<auth::Role> role{};
memgraph::slk::Load(&role, reader);
if (role)
self->SetRole(*role);
else
self->ClearRole();
}
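The optional<Role> specialization above frames the value as a bool "exists" flag followed by the payload only when present. A self-contained toy of that framing, using strings in place of the real SLK encoding:

#include <cassert>
#include <cstddef>
#include <optional>
#include <string>
#include <vector>

struct Stream {
  std::vector<std::string> items;
  std::size_t pos = 0;
};

void Save(const std::string &v, Stream &s) { s.items.push_back(v); }
void Load(std::string *v, Stream &s) { *v = s.items[s.pos++]; }

void Save(const std::optional<std::string> &v, Stream &s) {
  Save(std::string(v.has_value() ? "1" : "0"), s);  // the "exists" flag
  if (v) Save(*v, s);                               // payload only when present
}
void Load(std::optional<std::string> *v, Stream &s) {
  std::string exists;
  Load(&exists, s);
  if (exists == "1") {
    std::string tmp;
    Load(&tmp, s);
    *v = std::move(tmp);
  } else {
    *v = std::nullopt;
  }
}

int main() {
  Stream s;
  Save(std::optional<std::string>{"architect"}, s);
  std::optional<std::string> out;
  Load(&out, s);
  assert(out && *out == "architect");
}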
// Serialize code for auth::Auth::Config
void Save(const auth::Auth::Config &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self.name_regex_str, builder);
memgraph::slk::Save(self.password_regex_str, builder);
memgraph::slk::Save(self.password_permit_null, builder);
}
// Deserialize code for auth::Auth::Config
void Load(auth::Auth::Config *self, memgraph::slk::Reader *reader) {
std::string name_regex_str{};
std::string password_regex_str{};
bool password_permit_null{};
memgraph::slk::Load(&name_regex_str, reader);
memgraph::slk::Load(&password_regex_str, reader);
memgraph::slk::Load(&password_permit_null, reader);
*self = auth::Auth::Config{std::move(name_regex_str), std::move(password_regex_str), password_permit_null};
}
// Serialize code for UpdateAuthDataReq
void Save(const memgraph::replication::UpdateAuthDataReq &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self.main_uuid, builder);
memgraph::slk::Save(self.epoch_id, builder);
memgraph::slk::Save(self.expected_group_timestamp, builder);
memgraph::slk::Save(self.new_group_timestamp, builder);
memgraph::slk::Save(self.user, builder);
memgraph::slk::Save(self.role, builder);
}
void Load(memgraph::replication::UpdateAuthDataReq *self, memgraph::slk::Reader *reader) {
memgraph::slk::Load(&self->main_uuid, reader);
memgraph::slk::Load(&self->epoch_id, reader);
memgraph::slk::Load(&self->expected_group_timestamp, reader);
memgraph::slk::Load(&self->new_group_timestamp, reader);
memgraph::slk::Load(&self->user, reader);
memgraph::slk::Load(&self->role, reader);
}
// Serialize code for UpdateAuthDataRes
void Save(const memgraph::replication::UpdateAuthDataRes &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self.success, builder);
}
void Load(memgraph::replication::UpdateAuthDataRes *self, memgraph::slk::Reader *reader) {
memgraph::slk::Load(&self->success, reader);
}
// Serialize code for DropAuthDataReq
void Save(const memgraph::replication::DropAuthDataReq &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self.main_uuid, builder);
memgraph::slk::Save(self.epoch_id, builder);
memgraph::slk::Save(self.expected_group_timestamp, builder);
memgraph::slk::Save(self.new_group_timestamp, builder);
memgraph::slk::Save(utils::EnumToNum<2, uint8_t>(self.type), builder);
memgraph::slk::Save(self.name, builder);
}
void Load(memgraph::replication::DropAuthDataReq *self, memgraph::slk::Reader *reader) {
memgraph::slk::Load(&self->main_uuid, reader);
memgraph::slk::Load(&self->epoch_id, reader);
memgraph::slk::Load(&self->expected_group_timestamp, reader);
memgraph::slk::Load(&self->new_group_timestamp, reader);
uint8_t type_tmp = 0;
memgraph::slk::Load(&type_tmp, reader);
if (!utils::NumToEnum<2>(type_tmp, self->type)) {
throw SlkReaderException("Unexpected DropAuthDataReq::DataType value (line {})!", __LINE__);
}
memgraph::slk::Load(&self->name, reader);
}
// Serialize code for DropAuthDataRes
void Save(const memgraph::replication::DropAuthDataRes &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self.success, builder);
}
void Load(memgraph::replication::DropAuthDataRes *self, memgraph::slk::Reader *reader) {
memgraph::slk::Load(&self->success, reader);
}
} // namespace memgraph::slk
namespace memgraph::replication {
constexpr utils::TypeInfo UpdateAuthDataReq::kType{utils::TypeId::REP_UPDATE_AUTH_DATA_REQ, "UpdateAuthDataReq",
nullptr};
constexpr utils::TypeInfo UpdateAuthDataRes::kType{utils::TypeId::REP_UPDATE_AUTH_DATA_RES, "UpdateAuthDataRes",
nullptr};
constexpr utils::TypeInfo DropAuthDataReq::kType{utils::TypeId::REP_DROP_AUTH_DATA_REQ, "DropAuthDataReq", nullptr};
constexpr utils::TypeInfo DropAuthDataRes::kType{utils::TypeId::REP_DROP_AUTH_DATA_RES, "DropAuthDataRes", nullptr};
void UpdateAuthDataReq::Save(const UpdateAuthDataReq &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self, builder);
}
void UpdateAuthDataReq::Load(UpdateAuthDataReq *self, memgraph::slk::Reader *reader) {
memgraph::slk::Load(self, reader);
}
void UpdateAuthDataRes::Save(const UpdateAuthDataRes &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self, builder);
}
void UpdateAuthDataRes::Load(UpdateAuthDataRes *self, memgraph::slk::Reader *reader) {
memgraph::slk::Load(self, reader);
}
void DropAuthDataReq::Save(const DropAuthDataReq &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self, builder);
}
void DropAuthDataReq::Load(DropAuthDataReq *self, memgraph::slk::Reader *reader) { memgraph::slk::Load(self, reader); }
void DropAuthDataRes::Save(const DropAuthDataRes &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self, builder);
}
void DropAuthDataRes::Load(DropAuthDataRes *self, memgraph::slk::Reader *reader) { memgraph::slk::Load(self, reader); }
} // namespace memgraph::replication

127
src/auth/rpc.hpp Normal file
View File

@ -0,0 +1,127 @@
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#pragma once
#include <optional>
#include "auth/auth.hpp"
#include "auth/models.hpp"
#include "rpc/messages.hpp"
#include "slk/streams.hpp"
namespace memgraph::replication {
struct UpdateAuthDataReq {
static const utils::TypeInfo kType;
static const utils::TypeInfo &GetTypeInfo() { return kType; }
static void Load(UpdateAuthDataReq *self, memgraph::slk::Reader *reader);
static void Save(const UpdateAuthDataReq &self, memgraph::slk::Builder *builder);
UpdateAuthDataReq() = default;
UpdateAuthDataReq(const utils::UUID &main_uuid, std::string epoch_id, uint64_t expected_ts, uint64_t new_ts,
auth::User user)
: main_uuid(main_uuid),
epoch_id{std::move(epoch_id)},
expected_group_timestamp{expected_ts},
new_group_timestamp{new_ts},
user{std::move(user)} {}
UpdateAuthDataReq(const utils::UUID &main_uuid, std::string epoch_id, uint64_t expected_ts, uint64_t new_ts,
auth::Role role)
: main_uuid(main_uuid),
epoch_id{std::move(epoch_id)},
expected_group_timestamp{expected_ts},
new_group_timestamp{new_ts},
role{std::move(role)} {}
utils::UUID main_uuid;
std::string epoch_id;
uint64_t expected_group_timestamp;
uint64_t new_group_timestamp;
std::optional<auth::User> user;
std::optional<auth::Role> role;
};
struct UpdateAuthDataRes {
static const utils::TypeInfo kType;
static const utils::TypeInfo &GetTypeInfo() { return kType; }
static void Load(UpdateAuthDataRes *self, memgraph::slk::Reader *reader);
static void Save(const UpdateAuthDataRes &self, memgraph::slk::Builder *builder);
UpdateAuthDataRes() = default;
explicit UpdateAuthDataRes(bool success) : success{success} {}
bool success;
};
using UpdateAuthDataRpc = rpc::RequestResponse<UpdateAuthDataReq, UpdateAuthDataRes>;
struct DropAuthDataReq {
static const utils::TypeInfo kType;
static const utils::TypeInfo &GetTypeInfo() { return kType; }
static void Load(DropAuthDataReq *self, memgraph::slk::Reader *reader);
static void Save(const DropAuthDataReq &self, memgraph::slk::Builder *builder);
DropAuthDataReq() = default;
enum class DataType { USER, ROLE };
DropAuthDataReq(const utils::UUID &main_uuid, std::string epoch_id, uint64_t expected_ts, uint64_t new_ts,
DataType type, std::string_view name)
: main_uuid(main_uuid),
epoch_id{std::move(epoch_id)},
expected_group_timestamp{expected_ts},
new_group_timestamp{new_ts},
type{type},
name{name} {}
utils::UUID main_uuid;
std::string epoch_id;
uint64_t expected_group_timestamp;
uint64_t new_group_timestamp;
DataType type;
std::string name;
};
struct DropAuthDataRes {
static const utils::TypeInfo kType;
static const utils::TypeInfo &GetTypeInfo() { return kType; }
static void Load(DropAuthDataRes *self, memgraph::slk::Reader *reader);
static void Save(const DropAuthDataRes &self, memgraph::slk::Builder *builder);
DropAuthDataRes() = default;
explicit DropAuthDataRes(bool success) : success{success} {}
bool success;
};
using DropAuthDataRpc = rpc::RequestResponse<DropAuthDataReq, DropAuthDataRes>;
} // namespace memgraph::replication
namespace memgraph::slk {
void Save(const auth::Role &self, memgraph::slk::Builder *builder);
void Load(auth::Role *self, memgraph::slk::Reader *reader);
void Save(const auth::User &self, memgraph::slk::Builder *builder);
void Load(auth::User *self, memgraph::slk::Reader *reader);
void Save(const auth::Auth::Config &self, memgraph::slk::Builder *builder);
void Load(auth::Auth::Config *self, memgraph::slk::Reader *reader);
void Save(const memgraph::replication::UpdateAuthDataRes &self, memgraph::slk::Builder *builder);
void Load(memgraph::replication::UpdateAuthDataRes *self, memgraph::slk::Reader *reader);
void Save(const memgraph::replication::UpdateAuthDataReq & /*self*/, memgraph::slk::Builder * /*builder*/);
void Load(memgraph::replication::UpdateAuthDataReq * /*self*/, memgraph::slk::Reader * /*reader*/);
void Save(const memgraph::replication::DropAuthDataRes &self, memgraph::slk::Builder *builder);
void Load(memgraph::replication::DropAuthDataRes *self, memgraph::slk::Reader *reader);
void Save(const memgraph::replication::DropAuthDataReq & /*self*/, memgraph::slk::Builder * /*builder*/);
void Load(memgraph::replication::DropAuthDataReq * /*self*/, memgraph::slk::Reader * /*reader*/);
} // namespace memgraph::slk
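For orientation, a hedged sketch of a MAIN-side call site; the send path is not part of this diff, and the include path and the rpc::Client Stream/AwaitResponse usage are assumptions patterned on other RPCs in the repo:

// Hedged sketch, not part of this diff: how MAIN might replicate a user update.
#include "auth/rpc.hpp"
#include "rpc/client.hpp"

bool ReplicateUser(memgraph::rpc::Client &client, const memgraph::utils::UUID &main_uuid,
                   std::string epoch_id, uint64_t last_ts, memgraph::auth::User user) {
  auto stream = client.Stream<memgraph::replication::UpdateAuthDataRpc>(
      main_uuid, std::move(epoch_id), last_ts, last_ts + 1, std::move(user));
  return stream.AwaitResponse().success;  // false: replica out of sync, needs recovery
}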

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@ -15,6 +15,9 @@
#include "communication/bolt/v1/value.hpp"
#include "utils/logging.hpp"
#include "communication/bolt/v1/fmt.hpp"
#include "io/network/fmt.hpp"
namespace {
constexpr uint8_t kBoltV43Version[4] = {0x00, 0x00, 0x03, 0x04};
constexpr uint8_t kEmptyBoltVersion[4] = {0x00, 0x00, 0x00, 0x00};

View File

@ -0,0 +1,27 @@
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#pragma once
#if FMT_VERSION > 90000
#include <fmt/ostream.h>
#include "communication/bolt/v1/value.hpp"
template <>
class fmt::formatter<memgraph::communication::bolt::Value> : public fmt::ostream_formatter {};
template <>
class fmt::formatter<std::vector<memgraph::communication::bolt::Value>> : public fmt::ostream_formatter {};
template <>
class fmt::formatter<std::map<std::string, memgraph::communication::bolt::Value>> : public fmt::ostream_formatter {};
#endif
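Under fmt 9+, formatting a type through its operator<< is opt-in, which is exactly what these ostream_formatter specializations provide for the bolt value types. A usage sketch (Render is a hypothetical helper):

// Compiles only because of the formatter specializations above.
#include <string>
#include <fmt/core.h>
#include "communication/bolt/v1/fmt.hpp"
#include "communication/bolt/v1/value.hpp"

std::string Render(const memgraph::communication::bolt::Value &v) {
  return fmt::format("{}", v);  // resolves via fmt::ostream_formatter
}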

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@ -88,6 +88,12 @@ class Session {
virtual void Configure(const std::map<std::string, memgraph::communication::bolt::Value> &run_time_info) = 0;
#ifdef MG_ENTERPRISE
virtual auto Route(std::map<std::string, Value> const &routing,
std::vector<memgraph::communication::bolt::Value> const &bookmarks,
std::map<std::string, Value> const &extra) -> std::map<std::string, Value> = 0;
#endif
/**
* Put results of the processed query in the `encoder`.
*

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@ -79,9 +79,9 @@ State RunHandlerV4(Signature signature, TSession &session, State state, Marker m
}
case Signature::Route: {
if constexpr (bolt_minor >= 3) {
if (signature == Signature::Route) return HandleRoute<TSession>(session, marker);
return HandleRoute<TSession>(session, marker);
} else {
spdlog::trace("Supported only in bolt v4.3");
spdlog::trace("Supported only in bolt versions >= 4.3");
return State::Close;
}
}

View File

@ -478,9 +478,6 @@ State HandleGoodbye() {
template <typename TSession>
State HandleRoute(TSession &session, const Marker marker) {
// Route message is not implemented since it is Neo4j specific, therefore we will receive it and inform user that
// there is no implementation. Before that, we have to read out the fields from the buffer to leave it in a clean
// state.
if (marker != Marker::TinyStruct3) {
spdlog::trace("Expected TinyStruct3 marker, but received 0x{:02x}!", utils::UnderlyingCast(marker));
return State::Close;
@ -496,11 +493,27 @@ State HandleRoute(TSession &session, const Marker marker) {
spdlog::trace("Couldn't read bookmarks field!");
return State::Close;
}
// TODO: (andi) Fix Bolt versions
Value db;
if (!session.decoder_.ReadValue(&db)) {
spdlog::trace("Couldn't read db field!");
return State::Close;
}
#ifdef MG_ENTERPRISE
try {
auto res = session.Route(routing.ValueMap(), bookmarks.ValueList(), {});
if (!session.encoder_.MessageSuccess(std::move(res))) {
spdlog::trace("Couldn't send result of routing!");
return State::Close;
}
return State::Idle;
} catch (const std::exception &e) {
return HandleFailure(session, e);
}
#else
session.encoder_buffer_.Clear();
bool fail_sent =
session.encoder_.MessageFailure({{"code", "66"}, {"message", "Route message is not supported in Memgraph!"}});
@ -509,6 +522,7 @@ State HandleRoute(TSession &session, const Marker marker) {
return State::Close;
}
return State::Error;
#endif
}
template <typename TSession>

20
src/communication/fmt.hpp Normal file
View File

@ -0,0 +1,20 @@
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#pragma once
#if FMT_VERSION > 90000
#include <fmt/ostream.h>
#include <boost/asio/ip/tcp.hpp>
template <>
class fmt::formatter<boost::asio::ip::tcp::endpoint> : public fmt::ostream_formatter {};
#endif
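Same pattern for the network types: this header is what lets the Listener hunk below log the whole endpoint through a single placeholder instead of separate address()/port() calls. A usage sketch (Where is a hypothetical helper):

#include <string>
#include <boost/asio/ip/tcp.hpp>
#include <fmt/core.h>
#include "communication/fmt.hpp"

std::string Where(const boost::asio::ip::tcp::endpoint &ep) {
  return fmt::format("{}", ep);  // e.g. "127.0.0.1:7687", via fmt::ostream_formatter
}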

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@ -21,6 +21,7 @@
#include <boost/beast/core.hpp>
#include "communication/context.hpp"
#include "communication/fmt.hpp"
#include "communication/http/session.hpp"
#include "utils/spin_lock.hpp"
#include "utils/synchronized.hpp"
@ -82,7 +83,7 @@ class Listener final : public std::enable_shared_from_this<Listener<TRequestHand
return;
}
spdlog::info("HTTP server is listening on {}:{}", endpoint.address(), endpoint.port());
spdlog::info("HTTP server is listening on {}", endpoint);
}
void DoAccept() {

View File

@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@ -23,6 +23,7 @@
#include "communication/session.hpp"
#include "io/network/epoll.hpp"
#include "io/network/fmt.hpp"
#include "io/network/socket.hpp"
#include "utils/logging.hpp"
#include "utils/signals.hpp"

Some files were not shown because too many files have changed in this diff.