Compare commits

...

76 Commits

Author SHA1 Message Date
Andi
3faa64a53c
Forbid TSAN with Jemalloc () 2024-03-25 07:20:55 +00:00
Andi
581767b491
Bump googletest to 1.14 () 2024-03-23 14:45:12 +00:00
Andi
b228b431a8
Fix installation of mgcxx () 2024-03-22 17:31:01 +01:00
Antonio Filipovic
13e3a1d0f7
Add distributed locks in HA ()
- Add distributed locks
- Fix the wrong MAIN state on the follower coordinator
- Fix wrong main doing failover
2024-03-22 11:34:33 +00:00
Marko Barišić
89e13109d7
Fix jepsen nodes not starting up healthy ()
* add a loop to check if all nodes started correctly and restart if any failed
2024-03-21 18:39:40 +01:00
DavIvek
56be736d30
Fix and update mgbench () 2024-03-21 12:34:59 +00:00
DavIvek
a3d2474c5b
Fix timestamps saving on-disk () 2024-03-21 10:50:55 +00:00
Andi
0913e95167
Rename HA startup flags () 2024-03-21 09:12:28 +00:00
Andi
f699c0b37f
Support bolt+routing () 2024-03-21 06:41:26 +00:00
Ante Pušić
9629f10166
Text search (, )
Add text search:
* named property search
* all-property search
* regex search
* aggregation over search results

Text search works with:
* non-parallel transactions
* durability (WAL files and snapshots)
* multitenancy
2024-03-20 10:29:24 +01:00
Marko Barišić
2ac649f3b5
Upgrade jepsen ()
* Try with jepsen v0.3.5
* Add a few WIP adjustments
* Add replication restore state on startup flag
* Fix some run.sh scripts issues
* Improve cluster commands
* Run Jepsen on debian-12 with toolchain v5
---------
Co-authored-by: Marko Budiselic <mbudiselicbuda@gmail.com>
2024-03-18 16:38:58 +01:00
Marko Barišić
ec8536e11b
Make diff run on push to master again ()
* Add workflow dispatch and run on push to master
2024-03-18 11:58:34 +01:00
Marko Barišić
84fe853169
Fix cargo not found when building in mgbuild container ()
* Add source /home/mg/.cargo/env before cmake and make commands in mgbuild.sh
2024-03-18 10:47:59 +01:00
Josipmrden
082f9a7d9b
Add behaviour of no updates if vertex is updated with same value () 2024-03-15 14:45:21 +01:00
Aidar Samerkhanov
0ed2d18754
Add RollUpApply operator support to edge type index rewrite. () 2024-03-15 11:39:37 +04:00
Gareth Andrew Lloyd
8bc8e867e4
Pmr allocator unify ()
Query allocator and evaluation allocator were different.
After analysis, it was determined they should be the same; this will help
future development reduce TypeValue copies during queries.

Changes:
- Common allocator, PoolResource backed by MonotonicResource
- Optimized Pool, now O(1) alloc/dealloc as all chunks in Pool form a single
  free list (see the sketch after this entry)
- 2nd PoolResource, using bin sizing, not as perfect for memory usage but 
  O(1) bin selection
- Now have jemalloc's background thread to make sure decay and return 
  to OS happens
- Optimized PropertyValue to be faster at destruction/copy/move
- Less temporary memory allocations
  - CSV reader now maintains a common line buffer it reuses on line reads
  - Writing out bolt values, now reuses a values buffer
  - Evaluating an int no longer makes temporary error strings for exceptions
    it most likely never throws
  - ExpandVariable will reuse the existing edge list in the frame if one existed
2024-03-14 11:21:59 -07:00
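
For illustration, a minimal sketch of the single-free-list idea from the "Optimized Pool" bullet above; `FixedPool` is a hypothetical stand-in, not Memgraph's actual PoolResource. Every block of every chunk sits on one intrusive free list, so allocation and deallocation are O(1) pointer swaps:

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>
#include <vector>

class FixedPool {
 public:
  explicit FixedPool(std::size_t block_size, std::size_t blocks_per_chunk)
      : block_size_(RoundUp(block_size < sizeof(Node) ? sizeof(Node) : block_size)),
        blocks_per_chunk_(blocks_per_chunk) {}

  ~FixedPool() {
    for (char *chunk : chunks_) std::free(chunk);
  }

  void *Allocate() {
    if (!free_list_) AddChunk();  // amortized; steady state is O(1)
    Node *node = free_list_;
    free_list_ = node->next;
    return node;
  }

  void Deallocate(void *p) {  // O(1): push the block back onto the free list
    Node *node = static_cast<Node *>(p);
    node->next = free_list_;
    free_list_ = node;
  }

 private:
  struct Node { Node *next; };

  static std::size_t RoundUp(std::size_t n) {
    // Keep every block pointer-aligned so it can hold a free-list Node.
    constexpr std::size_t a = alignof(Node);
    return (n + a - 1) / a * a;
  }

  void AddChunk() {
    auto *chunk = static_cast<char *>(std::malloc(block_size_ * blocks_per_chunk_));
    if (!chunk) throw std::bad_alloc{};
    chunks_.push_back(chunk);
    // Thread every block of the new chunk onto the shared free list.
    for (std::size_t i = 0; i < blocks_per_chunk_; ++i) {
      Deallocate(chunk + i * block_size_);
    }
  }

  std::size_t block_size_;
  std::size_t blocks_per_chunk_;
  Node *free_list_ = nullptr;
  std::vector<char *> chunks_;
};
```
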
Marko Barišić
b0cdcd3483
Run CI in mgbuilder containers ()
* Update deployment files for mgbuilders because of toolchain upgrade
* Fix args parameter in builder yaml files
* Add fedora 38, 39 and rockylinux 9.3 mgbuilder Dockerfiles
* Change format of ARG TOOLCHAIN_VERSION from toolchain-vX to vX
* Add function to check supported arch, build type, os and toolchain
* Add options to init subcommand
* Add image names to mgbuilders
* Add v2 of the run.sh script
* Add testing to run2.sh
* Add option for threads --thread
* Add options for enterprise license and organization name
* Make stop mgbuild container step run always
* Add --ci flag to init script
* Move init conditionals under build-memgraph flags
* Add --community flag to build-memgraph
* Change target dir inside mgbuild container
* Add node fix to debian 11, ubuntu 20.04 and ubuntu 22.04
* rm memgraph repo after installing deps
* Add mg user in Dockerfile
* Add step to install rust on all OSs
* Chown files copied into mgbuild container
* Add e2e tests
* Add jepsen test
* Bugfix: Using reference in a callback
* Bugfix: Broad target for e2e tests
* Up db info test limit
* Disable e2e streams tests
* Fix default THREADS
* Prioritize docker compose over docker-compose
* Improve selection between docker compose and docker-compose
* Install PyYAML as mg user
* Fix doxygen install for rocky linux 9.3
* Fix rocky-9.3 environment script to properly install sbcl
* Rename all rocky-9 mentions to rocky-9.3
* Add mgdeps-cache and benchgraph-api hostnames to mgbuild images
* Add logic to pull mgbuild image if missing
* Fix build errors on toolchain-v5 ()
* Rename run2 script, remove run script, add small features to mgbuild.sh
* Add --no-copy flag to build-memgraph to resolve TODO
* Add timeouts to diff jobs
* Fix asio flaky clone, try mgdeps-cache first

---------

Co-authored-by: Andreja Tonev <andreja.tonev@memgraph.io>
Co-authored-by: Ante Pušić <ante.f.pusic@gmail.com>
Co-authored-by: antoniofilipovic <filipovicantonio1998@gmail.com>
2024-03-14 12:19:59 +01:00
Andi
24f8a14b43
Improve registration queries in HA environment () 2024-03-13 13:04:27 +00:00
Josipmrden
2cab07429e
Add new PR template () 2024-03-13 10:09:22 +01:00
DavIvek
de2e2048ef
Support label creation via property values () 2024-03-12 12:55:40 +00:00
Gareth Andrew Lloyd
a282542666
Optimise ORDER BY, RANGE, UNWIND ()
* Optimise frame change

* Optimise distinct + orderby memory usage

- dispose of collections as early as possible
- move values rather than copy (see the sketch after this entry)

* Better perf, ORDER BY

* Optimise RANGE and UNWIND

* ConstraintVerificationInfo only if at least one constraint

* Optimise TypeValue

* Clang-tidy fix
2024-03-12 00:26:11 +00:00
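
For illustration, a minimal sketch of the "move values rather than copy" and early-disposal points above, using a stand-in `Row` type rather than Memgraph's TypeValue frames:

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Row {
  std::string key;
  std::string payload;  // stands in for a large value
};

int main() {
  std::vector<Row> rows{{"b", "beta"}, {"a", "alpha"}};
  std::sort(rows.begin(), rows.end(),
            [](const Row &a, const Row &b) { return a.key < b.key; });
  for (auto &row : rows) {
    // Move each value out of the collection instead of copying it.
    std::string out = std::move(row.payload);
    std::cout << out << '\n';
  }
  // Dispose of the collection as early as possible, before downstream work.
  std::vector<Row>{}.swap(rows);
  return 0;
}
```
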
Josipmrden
462336ff78
Fix early exit for OR expression () 2024-03-11 22:44:15 +01:00
Aidar Samerkhanov
1c71d605ff
Fix PatternVisitor compilation in toolchain-v5 () 2024-03-08 19:20:40 -08:00
Antonio Filipovic
2a5388cea9
Add tests to verify log store works properly () 2024-03-08 15:16:30 +00:00
gvolfing
619b01f3f8
Implement edge type indices ()
2024-03-08 08:44:48 +01:00
Andi
5ca98f9543
Fix snapshot creation in RSM and forbid multiple leaders () 2024-03-07 17:40:32 +00:00
Aidar Samerkhanov
a099417c56
List Pattern Comprehension planner () 2024-03-07 18:41:02 +04:00
Antonio Filipovic
02325f8673
Fix bug prone add server to cluster behavior () 2024-03-07 11:10:33 +00:00
Katarina Supe
6f849a14df
Update cypherl transform script ()
* Update cypherl transform script

* Add new script and fix typo

* Add convert to separate files script

---------

Co-authored-by: Marko Budiselić <marko.budiselic@memgraph.com>
2024-03-07 10:04:36 +01:00
Andi
75aad72984
Improve in-memory RAFT state () 2024-03-06 09:16:46 +01:00
Antonio Filipovic
d4d4660af0
Add force sync REPLICA with MAIN () 2024-03-05 16:51:14 +00:00
Andi
1802dc93d1
Improve Raft log serialization () 2024-03-05 07:33:13 +00:00
Andi
822183b62d
Support failure of coordinators () 2024-03-04 07:24:18 +00:00
Antonio Filipovic
33caa27161
Ensure replication works on HA cluster in different scenarios () 2024-03-01 12:32:56 +01:00
Marko Barišić
f316f7db87
Add openssl to MEMGRAPH_BUILD_DEPS for amzn-2 and centos-7 () 2024-02-28 18:21:56 +01:00
Gareth Andrew Lloyd
55f224839e
Do not use UUID_STR_LEN ()
Older libuuid did not have this macro; we need to publish for older
distros with older libs.
2024-02-28 17:46:03 +01:00
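
For context: where it exists, `UUID_STR_LEN` is 37, the 36 characters of a textual UUID plus the terminating NUL. Whether this commit defines the value itself or avoids the textual form entirely is not shown here; a common portability fallback looks like this sketch:

```cpp
#include <uuid/uuid.h>  // libuuid

// Older libuuid headers do not define UUID_STR_LEN. The textual form of a
// UUID is always 36 characters plus a terminating NUL, so 37 is safe.
#ifndef UUID_STR_LEN
#define UUID_STR_LEN 37
#endif

int main() {
  uuid_t uuid;
  uuid_generate(uuid);
  char text[UUID_STR_LEN];
  uuid_unparse(uuid, text);  // writes 36 characters plus the NUL
  return 0;
}
```
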
Antonio Filipovic
b561c61b64
HA: Add initial logic for choosing new replica () 2024-02-28 09:57:00 +00:00
DavIvek
b7de79d5a0
Fix schema.node_type_properties() and schema.rel_type_properties() () 2024-02-27 21:40:55 +00:00
Gareth Andrew Lloyd
da898be8f9
Compact Delta 80B -> 56B ()
Make a special structure for old_disk_key. std::optional<std::string> was
40B, which was the largest member of our action union. Replaced with an 8B
structure (see the sketch after this entry).

This makes the largest member vertex_edge at 24B, which means Delta is
now only 56B.

🥳🎉 Now less than a cacheline 🎊
2024-02-27 17:21:52 +00:00
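
For illustration, a sketch of the size arithmetic above; `OldDiskKey` and `VertexEdge` are hypothetical stand-ins, the real Delta layout lives in Memgraph's storage engine:

```cpp
#include <cstdint>
#include <optional>
#include <string>

// Before: std::optional<std::string> dominated the action union; it is 40B
// on a typical 64-bit libstdc++ build.
using OldKey = std::optional<std::string>;

// After: a compact 8B handle (hypothetical stand-in) replaces it, e.g. a
// pointer to externally owned key data.
struct OldDiskKey {
  const char *data;
};
static_assert(sizeof(OldDiskKey) == 8);

// A 24B member such as vertex_edge is now the largest one (stand-in shown).
struct VertexEdge {
  void *vertex;
  void *edge;
  std::uint64_t gid;
};
static_assert(sizeof(VertexEdge) == 24);

union Action {
  OldDiskKey old_disk_key;
  VertexEdge vertex_edge;
};
// The union shrinks from 40B to 24B, which is the bulk of the 80B -> 56B
// Delta reduction.
static_assert(sizeof(Action) == 24);
```
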
Gareth Andrew Lloyd
a6fcdfd905
Make GC + snapshot, main lock friendly ()
- Only IN_MEMORY_ANALYTICAL requires unique lock during snapshot
- GC in some cases will be provided with a unique lock
  - This fact can be used for optimisations
  - In all other cases, optimisations should be done with an alternative
    check, not by acquiring a unique lock

Also:
- Faster property lookup
- Faster index iteration (better conditional branching)
2024-02-27 15:45:08 +01:00
Marko Barišić
e88c7a0aa5
Add jobs for pushing ARM packages ()
* Add jobs for pushing ARM packages
2024-02-27 12:08:53 +01:00
Marko Barišić
86ff96697d
Minor update to the rc workflow ()
* Increase ARM build timeout to 120 minutes

* Remove PushToS3 job and make each Package job push to S3 individually

* Expand ARM timeout to 150 minutes for added safety; revert this after release
2024-02-26 22:57:21 +01:00
andrejtonev
f4d9a3695d
Introduce multi-tenancy to SHOW REPLICAS ()
---------

Co-authored-by: Gareth Lloyd <gareth.lloyd@memgraph.io>
2024-02-26 19:05:49 +00:00
andrejtonev
c2e9df309a
Correctly call driver v1 tests () 2024-02-26 17:28:13 +00:00
andrejtonev
82c47ee80d
GetInfo simplification ()
* Removed force dir in the GetInfo functions
2024-02-26 14:55:45 +00:00
andrejtonev
6a4ef55e90
Better auth user/role handling ()
* Stop auth module from creating users
* Explicit about auth policy (check if no users defined OR auth module used)
* Role supports database access definition
* Authenticate() returns user or role
* AuthChecker generates QueryUserOrRole (can be empty)
* QueryUserOrRole actually authorizes
* Add auth cache invalidation
* Better database access queries (GRANT, DENY, REVOKE DATABASE)
2024-02-22 14:00:39 +00:00
Marko Budiselić
98727e0fa0
Update operating systems () 2024-02-22 11:14:48 +01:00
Aidar Samerkhanov
9a20ac494d
In BFS expansion with filter by path, shrink the path to restore the pre-expansion state only if the path was changed. () 2024-02-22 05:34:08 +00:00
Marko Barišić
e302be98a2
Push successful RC builds to S3 ()
* Add new workflow which calls release build workflows

* Make the workflow build packages only on RC tags

* Change artifact names to include OS name
2024-02-21 17:08:14 +01:00
Marko Budiselić
61b9bb0f59
Add toolchain-v5 compatibility, revert to C++20 ()
* Upgrade cppitertools, spdlog, fmt, rapidcheck
* Make compilation work on both v4 and v5 toolchains
2024-02-19 21:09:54 +01:00
Andi
7ec648b4ce
Add --experimental-enabled=high-availability () 2024-02-19 16:28:15 +00:00
Marko Budiselić
f098a9d5e3
Patch NuRaft for clang-17 compilation () 2024-02-19 14:50:37 +01:00
Josipmrden
bae3e8a6d3
Add function for property sizes ()
2024-02-19 13:56:01 +01:00
Andi
f3574012c5
Add cpp23 support () 2024-02-19 10:36:51 +00:00
Gareth Andrew Lloyd
33c400fcc1
Fixup memory e2e tests ()
- Remove the e2e test that did concurrent mgp_* calls on the same transaction
  (ATM this is unsupported)
- Fix up the concurrent mgp_global_alloc test to test it more precisely
- Reduce the memory limit on the detach delete test due to recent memory
  optimizations around deltas.
- No longer throw from the hook, through jemalloc's C code, to our C++ on the
  other side; this caused mutex unlocks to not happen.
- No longer allocate error messages while inside the hook; this caused
  recursive entry back inside jemalloc, which would try to relock a
  non-recursive mutex (see the sketch after this entry).
2024-02-16 15:35:08 +00:00
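
For illustration, a sketch of the no-throw, no-allocation constraint described above; `OnAlloc` and its counters are hypothetical, not Memgraph's actual hook API:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

std::atomic<std::int64_t> allocated_bytes{0};
std::atomic<bool> limit_exceeded{false};
constexpr std::int64_t kLimit = 64 * 1024 * 1024;

// Called by the allocator for every allocation. It must not allocate (an
// allocation would re-enter the allocator and try to relock a non-recursive
// mutex) and must not throw (unwinding through C frames skips mutex unlocks).
void OnAlloc(std::size_t size) noexcept {
  const auto bytes = static_cast<std::int64_t>(size);
  const auto total =
      allocated_bytes.fetch_add(bytes, std::memory_order_relaxed) + bytes;
  if (total > kLimit) {
    // Only set a flag here; the error message is built later, outside the
    // hook, where allocating is safe again.
    limit_exceeded.store(true, std::memory_order_relaxed);
  }
}
```
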
Marko Budiselić
5ac938a6c9
Remove default assignees from issue-bug template () 2024-02-16 14:41:53 +01:00
Andi
3e3224f0a2
Forbid having multiple mains in the cluster () 2024-02-16 11:41:15 +00:00
Antonio Filipovic
bfc756c092
HA: Polish flow for replicas from coordinator () 2024-02-16 10:58:01 +01:00
Marko Barišić
5f2e3f01d0
Turn e2e tests back on for release build workflows () 2024-02-15 16:20:04 +01:00
Marko Barišić
2c774ff09b
Add rules for rc workflows () 2024-02-15 15:33:14 +01:00
Andi
20b47845f0
Forbid writing to cluster-managed main on restart () 2024-02-15 14:07:04 +01:00
Andi
fb281459b9
Add support for unregistering replication instances () 2024-02-14 14:24:59 +00:00
Andi
3a7e62f72c
Forbid branching when registering replica in auto-managed cluster () 2024-02-14 08:02:51 +00:00
Gareth Andrew Lloyd
f48151576b
System replication experimental flag ()
- Remove the compile-time control
- Introduce the runtime control flag (see the sketch after this entry)

New flag `--experimental-enabled=system-replication`
2024-02-13 12:57:18 +00:00
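
For illustration, a sketch of how a compile-time guard can become a runtime flag check; the `Experiments` registry below is hypothetical, not Memgraph's actual flag handling:

```cpp
#include <string>
#include <string_view>
#include <unordered_set>

class Experiments {
 public:
  // e.g. ParseFrom("system-replication,high-availability")
  void ParseFrom(std::string_view csv) {
    while (!csv.empty()) {
      const auto pos = csv.find(',');
      enabled_.emplace(csv.substr(0, pos));
      if (pos == std::string_view::npos) break;
      csv.remove_prefix(pos + 1);
    }
  }
  bool IsEnabled(std::string_view name) const {
    return enabled_.count(std::string(name)) > 0;
  }

 private:
  std::unordered_set<std::string> enabled_;
};

int main() {
  Experiments exp;
  exp.ParseFrom("system-replication");
  // The runtime check replaces the old compile-time #ifdef guard.
  return exp.IsEnabled("system-replication") ? 0 : 1;
}
```
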
Andi
4a7c7f0898
Distributed coordinators () 2024-02-13 08:49:28 +00:00
Ivan Milinović
7688a1b068
Fix unbound variable causing crash inside subquery () 2024-02-13 01:10:03 +01:00
Antonio Filipovic
4f4a569c72
Revert replication tests () 2024-02-12 16:42:57 +01:00
Ivan Milinović
a511e63c7a
Fix memory tracker counting wrong after OOM () 2024-02-11 20:29:06 +01:00
DavIvek
0133673f1d
Add support for query params in load csv () 2024-02-09 18:26:27 +01:00
DavIvek
786cdea260
Fix go driver test () 2024-02-09 17:07:30 +01:00
Antonio Filipovic
54f78f9217
Revert e2e tests and remove flaky ones () 2024-02-09 12:55:31 +01:00
Marko Barišić
dcdbd0a19a
Fix primary urls () 2024-02-08 14:19:30 +01:00
Andi
cf80687d1d
HA: Organize Raft coordinator group () 2024-02-08 09:11:33 +00:00
Aidar Samerkhanov
2fa8e00124
Fix accumulated path evaluation in builtin algorithms. ()
Fix accumulated path evaluation in DFS, BFS, WeightedShortestPath and AllShortestPath algorithms.
2024-02-08 10:48:54 +04:00
Antonio Filipovic
c15b62a88d
HA: Disable replication from old main () 2024-02-07 11:20:47 +01:00
Gareth Andrew Lloyd
4ef6a1f9c3
Improve memory handling of Deltas ()
- Reduce delta from 104B to 80B
- Hold and pass them around in a deque (see the sketch after this entry)
- Detect and delete deltas within commit if safe to do so
2024-02-06 18:07:38 +01:00
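
For illustration, why a deque suits holding deltas, as noted above: unlike `std::vector`, `std::deque` never relocates existing elements on `push_back`, so pointers and references to already-stored deltas stay valid. `Delta` here is a hypothetical stand-in:

```cpp
#include <cstdint>
#include <deque>

struct Delta {
  std::uint64_t timestamp;  // stand-in payload
};

int main() {
  std::deque<Delta> deltas;
  deltas.push_back({1});
  Delta *first = &deltas.front();
  for (std::uint64_t ts = 2; ts < 10000; ++ts) deltas.push_back({ts});
  // `first` still points at the original element; with std::vector this
  // could dangle after any reallocation.
  return first->timestamp == 1 ? 0 : 1;
}
```
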
578 changed files with 29454 additions and 7118 deletions
.clang-tidy
.github
CMakeLists.txt
environment
import
include
init
libs
query_modules
release/package
src


@ -6,6 +6,7 @@ Checks: '*,
-altera-unroll-loops,
-android-*,
-cert-err58-cpp,
-cppcoreguidelines-avoid-do-while,
-cppcoreguidelines-avoid-c-arrays,
-cppcoreguidelines-avoid-goto,
-cppcoreguidelines-avoid-magic-numbers,
@ -60,10 +61,11 @@ Checks: '*,
-readability-implicit-bool-conversion,
-readability-magic-numbers,
-readability-named-parameter,
-readability-identifier-length,
-misc-no-recursion,
-concurrency-mt-unsafe,
-bugprone-easily-swappable-parameters'
-bugprone-easily-swappable-parameters,
-bugprone-unchecked-optional-access'
WarningsAsErrors: ''
HeaderFilterRegex: 'src/.*'
AnalyzeTemporaryDtors: false


@ -3,7 +3,6 @@ name: Bug report
about: Create a report to help us improve
title: ""
labels: bug
assignees: gitbuda
---
**Memgraph version**


@ -1,14 +1,28 @@
### Description
Please briefly explain the changes you made here.
Please delete either the [master < Epic] or [master < Task] part, depending on your needs.
[master < Epic] PR
- [ ] Check, and update documentation if necessary
- [ ] Write E2E tests
- [ ] Compare the [benchmarking results](https://bench-graph.memgraph.com/) between the master branch and the Epic branch
- [ ] Provide the full content or a guide for the final git message
- [FINAL GIT MESSAGE]
[master < Task] PR
- [ ] Check, and update documentation if necessary
- [ ] Provide the full content or a guide for the final git message
- **[FINAL GIT MESSAGE]**
To keep docs changelog up to date, one more thing to do:
- [ ] Write a release note here, including added/changed clauses
### Documentation checklist
- [ ] Add the documentation label tag
- [ ] Add the bug / feature label tag
- [ ] Add the milestone for which this feature is intended
- If not known, set for a later milestone
- [ ] Write a release note, including added/changed clauses
- **[Release note text]**
- [ ] Link the documentation PR here
- **[Documentation PR link]**
- [ ] Tag someone from docs team in the comments


@ -19,11 +19,16 @@ on:
jobs:
community_build:
name: "Community build"
runs-on: [self-hosted, Linux, X64, Diff]
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 60
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: RelWithDebInfo
steps:
- name: Set up repository
@ -33,35 +38,56 @@ jobs:
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build community binaries
- name: Spin up mgbuild container
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
# Initialize dependencies.
./init
# Build community binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DMG_ENTERPRISE=OFF ..
make -j$THREADS
- name: Build release binaries
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph --community
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph unit
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure -j$THREADS
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
code_analysis:
name: "Code analysis"
runs-on: [self-hosted, Linux, X64, Diff]
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 60
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: Debug
steps:
- name: Set up repository
@ -71,6 +97,14 @@ jobs:
# branches and tags. (default: 1)
fetch-depth: 0
- name: Spin up mgbuild container
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
# This is also needed if we want to do a comparison against other branches
# See https://github.community/t/checkout-code-fails-when-it-runs-lerna-run-test-since-master/17920
- name: Fetch all history for all tags and branches
@ -78,11 +112,13 @@ jobs:
- name: Initialize deps
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph --init-only
- name: Set base branch
if: ${{ github.event_name == 'pull_request' }}
@ -96,45 +132,43 @@ jobs:
- name: Python code analysis
run: |
CHANGED_FILES=$(git diff -U0 ${{ env.BASE_BRANCH }}... --name-only --diff-filter=d)
for file in ${CHANGED_FILES}; do
echo ${file}
if [[ ${file} == *.py ]]; then
python3 -m black --check --diff ${file}
python3 -m isort --profile black --check-only --diff ${file}
fi
done
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph code-analysis --base-branch "${{ env.BASE_BRANCH }}"
- name: Build combined ASAN, UBSAN and coverage binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
cd build
cmake -DTEST_COVERAGE=ON -DASAN=ON -DUBSAN=ON ..
make -j$THREADS memgraph__unit
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph --coverage --asan --ubsan
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests. It is restricted to 2 threads intentionally, because higher concurrency makes the timing related tests unstable.
cd build
LSAN_OPTIONS=suppressions=$PWD/../tools/lsan.supp UBSAN_OPTIONS=halt_on_error=1 ctest -R memgraph__unit --output-on-failure -j2
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph unit-coverage
- name: Compute code coverage
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Compute code coverage.
cd tools/github
./coverage_convert
# Package code coverage.
cd generated
tar -czf code_coverage.tar.gz coverage.json html report.json summary.rmu
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph code-coverage
- name: Save code coverage
uses: actions/upload-artifact@v4
@ -144,21 +178,36 @@ jobs:
- name: Run clang-tidy
run: |
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph clang-tidy --base-branch "${{ env.BASE_BRANCH }}"
# Restrict clang-tidy results only to the modified parts
git diff -U0 ${{ env.BASE_BRANCH }}... -- src | ./tools/github/clang-tidy/clang-tidy-diff.py -p 1 -j $THREADS -path build -regex ".+\.cpp" | tee ./build/clang_tidy_output.txt
# Fail if any warning is reported
! cat ./build/clang_tidy_output.txt | ./tools/github/clang-tidy/grep_error_lines.sh > /dev/null
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
debug_build:
name: "Debug build"
runs-on: [self-hosted, Linux, X64, Diff]
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 100
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: Debug
steps:
- name: Set up repository
@ -168,44 +217,78 @@ jobs:
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build debug binaries
- name: Spin up mgbuild container
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
# Initialize dependencies.
./init
# Build debug binaries.
cd build
cmake ..
make -j$THREADS
- name: Build release binaries
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph
- name: Run leftover CTest tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run leftover CTest tests (all except unit and benchmark tests).
cd build
ctest -E "(memgraph__unit|memgraph__benchmark)" --output-on-failure
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph leftover-CTest
- name: Run drivers tests
run: |
./tests/drivers/run.sh
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph drivers
- name: Run HA driver tests
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph drivers-high-availability
- name: Run integration tests
run: |
tests/integration/run.sh
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph integration
- name: Run cppcheck and clang-format
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run cppcheck and clang-format.
cd tools/github
./cppcheck_and_clang_format diff
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph cppcheck-and-clang-format
- name: Save cppcheck and clang-format errors
uses: actions/upload-artifact@v4
@ -213,13 +296,27 @@ jobs:
name: "Code coverage(Debug build)"
path: tools/github/cppcheck_and_clang_format.txt
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
release_build:
name: "Release build"
runs-on: [self-hosted, Linux, X64, Diff]
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 100
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: Release
steps:
- name: Set up repository
@ -229,26 +326,33 @@ jobs:
# branches and tags. (default: 1)
fetch-depth: 0
- name: Spin up mgbuild container
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j$THREADS
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph
- name: Run GQL Behave tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
cd gql_behave
./continuous_integration
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph gql-behave
- name: Save quality assurance status
uses: actions/upload-artifact@v4
@ -260,13 +364,17 @@ jobs:
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure -j$THREADS
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph unit
# This step will be skipped because the e2e stream tests have been disabled
# We need to fix this as soon as possible
- name: Ensure Kafka and Pulsar are up
if: false
run: |
@ -276,14 +384,16 @@ jobs:
docker-compose up -d
- name: Run e2e tests
if: false
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
source ve3/bin/activate_e2e
cd e2e
./run.sh
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph e2e
# Same as two steps prior
- name: Ensure Kafka and Pulsar are down
if: false
run: |
@ -294,171 +404,92 @@ jobs:
- name: Run stress test (plain)
run: |
cd tests/stress
source ve3/bin/activate
./continuous_integration
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph stress-plain
- name: Run stress test (SSL)
run: |
cd tests/stress
source ve3/bin/activate
./continuous_integration --use-ssl
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph stress-ssl
- name: Run durability test
run: |
cd tests/stress
source ve3/bin/activate
python3 durability --num-steps 5
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph durability
- name: Create enterprise DEB package
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
cd build
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
package-memgraph
# create mgconsole
# we use the -B to force the build
make -j$THREADS -B mgconsole
# Create enterprise DEB package.
mkdir output && cd output
cpack -G DEB --config ../CPackConfig.cmake
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
copy --package
- name: Save enterprise DEB package
uses: actions/upload-artifact@v4
with:
name: "Enterprise DEB package"
path: build/output/memgraph*.deb
path: build/output/${{ env.OS }}/memgraph*.deb
- name: Copy build logs
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
copy --build-logs
- name: Save test data
uses: actions/upload-artifact@v4
if: always()
with:
name: "Test data(Release build)"
path: |
# multiple paths could be defined
build/logs
path: build/logs
experimental_build_ha:
name: "High availability build"
runs-on: [self-hosted, Linux, X64, Diff]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
source /opt/toolchain-v4/activate
./init
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DMG_EXPERIMENTAL_HIGH_AVAILABILITY=ON ..
make -j$THREADS
- name: Run unit tests
run: |
source /opt/toolchain-v4/activate
cd build
ctest -R memgraph__unit --output-on-failure -j$THREADS
- name: Run e2e tests
if: false
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
source ve3/bin/activate_e2e
cd e2e
./run.sh "Coordinator"
./run.sh "Client initiated failover"
./run.sh "Uninitialized cluster"
- name: Save test data
uses: actions/upload-artifact@v4
- name: Stop mgbuild container
if: always()
with:
name: "Test data(High availability build)"
path: |
# multiple paths could be defined
build/logs
experimental_build_mt:
name: "MultiTenancy replication build"
runs-on: [self-hosted, Linux, X64, Diff]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build MT replication experimental binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=Release -D MG_EXPERIMENTAL_REPLICATION_MULTITENANCY=ON ..
make -j$THREADS
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure -j$THREADS
- name: Run e2e tests
if: false
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
source ve3/bin/activate_e2e
cd e2e
# Just the replication based e2e tests
./run.sh "Replicate multitenancy"
./run.sh "Show"
./run.sh "Show while creating invalid state"
./run.sh "Delete edge replication"
./run.sh "Read-write benchmark"
./run.sh "Index replication"
./run.sh "Constraints"
- name: Save test data
uses: actions/upload-artifact@v4
if: always()
with:
name: "Test data(MultiTenancy replication build)"
path: |
# multiple paths could be defined
build/logs
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
release_jepsen_test:
name: "Release Jepsen Test"
runs-on: [self-hosted, Linux, X64, Debian10, JepsenControl]
#continue-on-error: true
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 80
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-12
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: RelWithDebInfo
steps:
- name: Set up repository
@ -468,16 +499,31 @@ jobs:
# branches and tags. (default: 1)
fetch-depth: 0
- name: Spin up mgbuild container
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build only the memgraph release binary.
cd build
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
make -j$THREADS memgraph
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph
- name: Copy memgraph binary
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
copy --binary
- name: Refresh Jepsen Cluster
run: |
@ -496,13 +542,27 @@ jobs:
name: "Jepsen Report"
path: tests/jepsen/Jepsen.tar.gz
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
release_benchmarks:
name: "Release benchmarks"
runs-on: [self-hosted, Linux, X64, Diff, Gen7]
runs-on: [self-hosted, Linux, X64, DockerMgBuild, Gen7]
timeout-minutes: 60
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: Release
steps:
- name: Set up repository
@ -512,25 +572,33 @@ jobs:
# branches and tags. (default: 1)
fetch-depth: 0
- name: Spin up mgbuild container
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build only memgraph release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=release ..
make -j$THREADS
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph
- name: Run macro benchmarks
run: |
cd tests/macro_benchmark
./harness QuerySuite MemgraphRunner \
--groups aggregation 1000_create unwind_create dense_expand match \
--no-strict
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph macro-benchmark
- name: Get branch name (merge)
if: github.event_name != 'pull_request'
@ -544,30 +612,49 @@ jobs:
- name: Upload macro benchmark results
run: |
cd tools/bench-graph-client
virtualenv -p python3 ve3
source ve3/bin/activate
pip install -r requirements.txt
./main.py --benchmark-name "macro_benchmark" \
--benchmark-results "../../tests/macro_benchmark/.harness_summary" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph upload-to-bench-graph \
--benchmark-name "macro_benchmark" \
--benchmark-results "../../tests/macro_benchmark/.harness_summary" \
--github-run-id ${{ github.run_id }} \
--github-run-number ${{ github.run_number }} \
--head-branch-name ${{ env.BRANCH_NAME }}
# TODO (andi) No need for path flags and for --disk-storage and --in-memory-analytical
- name: Run mgbench
run: |
cd tests/mgbench
./benchmark.py vendor-native --num-workers-for-benchmark 12 --export-results benchmark_result.json pokec/medium/*/*
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph mgbench
- name: Upload mgbench results
run: |
cd tools/bench-graph-client
virtualenv -p python3 ve3
source ve3/bin/activate
pip install -r requirements.txt
./main.py --benchmark-name "mgbench" \
--benchmark-results "../../tests/mgbench/benchmark_result.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph upload-to-bench-graph \
--benchmark-name "mgbench" \
--benchmark-results "../../tests/mgbench/benchmark_result.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove


@ -0,0 +1,208 @@
name: Release build test
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}
cancel-in-progress: true
on:
workflow_dispatch:
inputs:
build_type:
type: choice
description: "Memgraph Build type. Default value is Release."
default: 'Release'
options:
- Release
- RelWithDebInfo
push:
branches:
- "release/**"
tags:
- "v*.*.*-rc*"
- "v*.*-rc*"
schedule:
# UTC
- cron: "0 22 * * *"
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
BUILD_TYPE: ${{ github.event.inputs.build_type || 'Release' }}
jobs:
Debian10:
uses: ./.github/workflows/release_debian10.yaml
with:
build_type: ${{ github.event.inputs.build_type || 'Release' }}
secrets: inherit
Ubuntu20_04:
uses: ./.github/workflows/release_ubuntu2004.yaml
with:
build_type: ${{ github.event.inputs.build_type || 'Release' }}
secrets: inherit
PackageDebian10:
if: github.ref_type == 'tag'
needs: [Debian10]
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-10 $BUILD_TYPE
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-10
path: build/output/debian-10/memgraph*.deb
PackageUbuntu20_04:
if: github.ref_type == 'tag'
needs: [Ubuntu20_04]
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-22.04 $BUILD_TYPE
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: ubuntu-22.04
path: build/output/ubuntu-22.04/memgraph*.deb
PackageUbuntu20_04_ARM:
if: github.ref_type == 'tag'
needs: [Ubuntu20_04]
runs-on: [self-hosted, DockerMgBuild, ARM64]
# M1 Mac mini is sometimes slower
timeout-minutes: 150
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-22.04-arm $BUILD_TYPE
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: ubuntu-22.04-aarch64
path: build/output/ubuntu-22.04-arm/memgraph*.deb
PushToS3Ubuntu20_04_ARM:
if: github.ref_type == 'tag'
needs: [PackageUbuntu20_04_ARM]
runs-on: ubuntu-latest
steps:
- name: Download package
uses: actions/download-artifact@v4
with:
name: ubuntu-22.04-aarch64
path: build/output/release
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output/release"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"
PackageDebian11:
if: github.ref_type == 'tag'
needs: [Debian10, Ubuntu20_04]
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11 $BUILD_TYPE
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-11
path: build/output/debian-11/memgraph*.deb
PackageDebian11_ARM:
if: github.ref_type == 'tag'
needs: [Debian10, Ubuntu20_04]
runs-on: [self-hosted, DockerMgBuild, ARM64]
# M1 Mac mini is sometimes slower
timeout-minutes: 150
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11-arm $BUILD_TYPE
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-11-aarch64
path: build/output/debian-11-arm/memgraph*.deb
PushToS3Debian11_ARM:
if: github.ref_type == 'tag'
needs: [PackageDebian11_ARM]
runs-on: ubuntu-latest
steps:
- name: Download package
uses: actions/download-artifact@v4
with:
name: debian-11-aarch64
path: build/output/release
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output/release"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"


@ -1,6 +1,12 @@
name: Release Debian 10
on:
workflow_call:
inputs:
build_type:
type: string
description: "Memgraph Build type. Default value is Release."
default: 'Release'
workflow_dispatch:
inputs:
build_type:
@ -11,10 +17,8 @@ on:
- Release
- RelWithDebInfo
schedule:
- cron: "0 22 * * *"
env:
OS: "Debian10"
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
@ -111,7 +115,7 @@ jobs:
- name: Save code coverage
uses: actions/upload-artifact@v4
with:
name: "Code coverage(Coverage build)"
name: "Code coverage(Coverage build)-${{ env.OS }}"
path: tools/github/generated/code_coverage.tar.gz
debug_build:
@ -165,7 +169,7 @@ jobs:
- name: Save cppcheck and clang-format errors
uses: actions/upload-artifact@v4
with:
name: "Code coverage(Debug build)"
name: "Code coverage(Debug build)-${{ env.OS }}"
path: tools/github/cppcheck_and_clang_format.txt
debug_integration_test:
@ -242,7 +246,7 @@ jobs:
- name: Save enterprise DEB package
uses: actions/upload-artifact@v4
with:
name: "Enterprise DEB package"
name: "Enterprise DEB package-${{ env.OS}}"
path: build/output/memgraph*.deb
- name: Run GQL Behave tests
@ -255,7 +259,7 @@ jobs:
- name: Save quality assurance status
uses: actions/upload-artifact@v4
with:
name: "GQL Behave Status"
name: "GQL Behave Status-${{ env.OS }}"
path: |
tests/gql_behave/gql_behave_status.csv
tests/gql_behave/gql_behave_status.html
@ -321,7 +325,6 @@ jobs:
--no-strict
release_e2e_test:
if: false
name: "Release End-to-end Test"
runs-on: [self-hosted, Linux, X64, Debian10]
timeout-minutes: 60
@ -456,5 +459,5 @@ jobs:
uses: actions/upload-artifact@v4
if: ${{ always() }}
with:
name: "Jepsen Report"
name: "Jepsen Report-${{ env.OS }}"
path: tests/jepsen/Jepsen.tar.gz


@ -1,6 +1,12 @@
name: Release Ubuntu 20.04
on:
workflow_call:
inputs:
build_type:
type: string
description: "Memgraph Build type. Default value is Release."
default: 'Release'
workflow_dispatch:
inputs:
build_type:
@ -11,10 +17,8 @@ on:
- Release
- RelWithDebInfo
schedule:
- cron: "0 22 * * *"
env:
OS: "Ubuntu 20.04"
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
@ -107,7 +111,7 @@ jobs:
- name: Save code coverage
uses: actions/upload-artifact@v4
with:
name: "Code coverage(Coverage build)"
name: "Code coverage(Coverage build)-${{ env.OS }}"
path: tools/github/generated/code_coverage.tar.gz
debug_build:
@ -161,7 +165,7 @@ jobs:
- name: Save cppcheck and clang-format errors
uses: actions/upload-artifact@v4
with:
name: "Code coverage(Debug build)"
name: "Code coverage(Debug build)-${{ env.OS }}"
path: tools/github/cppcheck_and_clang_format.txt
debug_integration_test:
@ -238,7 +242,7 @@ jobs:
- name: Save enterprise DEB package
uses: actions/upload-artifact@v4
with:
name: "Enterprise DEB package"
name: "Enterprise DEB package-${{ env.OS }}"
path: build/output/memgraph*.deb
- name: Run GQL Behave tests
@ -251,7 +255,7 @@ jobs:
- name: Save quality assurance status
uses: actions/upload-artifact@v4
with:
name: "GQL Behave Status"
name: "GQL Behave Status-${{ env.OS }}"
path: |
tests/gql_behave/gql_behave_status.csv
tests/gql_behave/gql_behave_status.html
@ -317,7 +321,6 @@ jobs:
--no-strict
release_e2e_test:
if: false
name: "Release End-to-end Test"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
timeout-minutes: 60


@ -1,4 +1,7 @@
name: Stress test large
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}
cancel-in-progress: true
on:
workflow_dispatch:
@ -10,7 +13,10 @@ on:
options:
- Release
- RelWithDebInfo
push:
tags:
- "v*.*.*-rc*"
- "v*.*-rc*"
schedule:
- cron: "0 22 * * *"


@ -211,8 +211,13 @@ set(CMAKE_CXX_FLAGS_RELWITHDEBINFO
# ** Static linking is allowed only for executables! **
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libgcc -static-libstdc++")
# Use lld linker to speedup build
add_link_options(-fuse-ld=lld) # TODO: use mold linker
# Use lld linker to speedup build and use less memory.
add_link_options(-fuse-ld=lld)
# NOTE: Moving to latest Clang (probably starting from 15), lld stopped to work
# without explicit link_directories call.
string(REPLACE ":" " " LD_LIBS $ENV{LD_LIBRARY_PATH})
separate_arguments(LD_LIBS)
link_directories(${LD_LIBS})
# release flags
set(CMAKE_CXX_FLAGS_RELEASE "-O2 -DNDEBUG")
@ -271,18 +276,6 @@ endif()
set(libs_dir ${CMAKE_SOURCE_DIR}/libs)
add_subdirectory(libs EXCLUDE_FROM_ALL)
option(MG_EXPERIMENTAL_HIGH_AVAILABILITY "Feature flag for experimental high availability" OFF)
if (NOT MG_ENTERPRISE AND MG_EXPERIMENTAL_HIGH_AVAILABILITY)
set(MG_EXPERIMENTAL_HIGH_AVAILABILITY OFF)
message(FATAL_ERROR "MG_EXPERIMENTAL_HIGH_AVAILABILITY must be used with enterpise version of the code.")
endif ()
if (MG_EXPERIMENTAL_HIGH_AVAILABILITY)
add_compile_definitions(MG_EXPERIMENTAL_HIGH_AVAILABILITY)
endif ()
# Optional subproject configuration -------------------------------------------
option(TEST_COVERAGE "Generate coverage reports from running memgraph" OFF)
option(TOOLS "Build tools binaries" ON)
option(QUERY_MODULES "Build query modules containing custom procedures" ON)
@ -291,16 +284,6 @@ option(TSAN "Build with Thread Sanitizer. To get a reasonable performance option
option(UBSAN "Build with Undefined Behaviour Sanitizer" OFF)
# Build feature flags
option(MG_EXPERIMENTAL_REPLICATION_MULTITENANCY "Feature flag for experimental replication of multitenancy" OFF)
if (NOT MG_ENTERPRISE AND MG_EXPERIMENTAL_REPLICATION_MULTITENANCY)
set(MG_EXPERIMENTAL_REPLICATION_MULTITENANCY OFF)
message(FATAL_ERROR "MG_EXPERIMENTAL_REPLICATION_MULTITENANCY with community edition build isn't possible")
endif ()
if (MG_EXPERIMENTAL_REPLICATION_MULTITENANCY)
add_compile_definitions(MG_EXPERIMENTAL_REPLICATION_MULTITENANCY)
endif ()
if (TEST_COVERAGE)
string(TOLOWER ${CMAKE_BUILD_TYPE} lower_build_type)
@ -317,6 +300,19 @@ endif()
option(ENABLE_JEMALLOC "Use jemalloc" ON)
option(MG_MEMORY_PROFILE "If build should be setup for memory profiling" OFF)
if (MG_MEMORY_PROFILE AND ENABLE_JEMALLOC)
message(STATUS "Jemalloc has been disabled because MG_MEMORY_PROFILE is enabled")
set(ENABLE_JEMALLOC OFF)
endif ()
if (MG_MEMORY_PROFILE AND ASAN)
message(STATUS "ASAN has been disabled because MG_MEMORY_PROFILE is enabled")
set(ASAN OFF)
endif ()
if (MG_MEMORY_PROFILE)
add_compile_definitions(MG_MEMORY_PROFILE)
endif ()
if (ASAN)
message(WARNING "Disabling jemalloc as it doesn't work well with ASAN")
set(ENABLE_JEMALLOC OFF)
@ -341,6 +337,8 @@ if (ASAN)
endif()
if (TSAN)
message(WARNING "Disabling jemalloc as it doesn't work well with ASAN")
set(ENABLE_JEMALLOC OFF)
# ThreadSanitizer generally requires all code to be compiled with -fsanitize=thread.
# If some code (e.g. dynamic libraries) is not compiled with the flag, it can
# lead to false positive race reports, false negative race reports and/or
@ -356,7 +354,7 @@ if (TSAN)
# By default ThreadSanitizer uses addr2line utility to symbolize reports.
# llvm-symbolizer is faster, consumes less memory and produces much better
# reports. To use it set runtime flag:
# TSAN_OPTIONS="extern-symbolizer-path=~/llvm-symbolizer"
# TSAN_OPTIONS="extern-symbolizer-path=~/llvm-symbolizer"
# For more runtime flags see: https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags
endif()


@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -9,7 +7,7 @@ check_operating_system "amzn-2"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
gcc gcc-c++ make # generic build tools
git gcc gcc-c++ make # generic build tools
wget # used for archive download
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
@ -47,6 +45,7 @@ MEMGRAPH_BUILD_DEPS=(
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
openssl
libseccomp-devel
python3 python3-pip nmap-ncat # for tests
#
@ -63,6 +62,8 @@ MEMGRAPH_BUILD_DEPS=(
cyrus-sasl-devel
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)


@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -45,6 +43,7 @@ MEMGRAPH_BUILD_DEPS=(
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
openssl
libseccomp-devel
python3 python-virtualenv python3-pip nmap-ncat # for qa, macro_benchmark and stress tests
#
@ -63,6 +62,8 @@ MEMGRAPH_BUILD_DEPS=(
cyrus-sasl-devel
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)


@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -9,8 +7,10 @@ check_operating_system "centos-9"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
coreutils-common gcc gcc-c++ make # generic build tools
wget # used for archive download
coreutils-common gcc gcc-c++ make # generic build tools
# NOTE: Pure libcurl conflicts with libcurl-minimal
libcurl-devel # cmake build requires it
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
zlib-devel # zlib library used for all builds
@ -64,6 +64,8 @@ MEMGRAPH_BUILD_DEPS=(
cyrus-sasl-devel
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
@ -123,7 +125,9 @@ install() {
else
echo "NOTE: export LANG=en_US.utf8"
fi
yum update -y
# --nobest is used because we install custom libipt versions,
# since libipt-devel is not available on CentOS 9 Stream
yum update -y --nobest
yum install -y wget git python3 python3-pip
for pkg in $1; do


@ -1,10 +1,10 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# IMPORTANT: Deprecated since memgraph v2.12.0.
check_operating_system "debian-10"
check_architecture "x86_64"


@ -1,10 +1,10 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# IMPORTANT: Deprecated since memgraph v2.12.0.
check_operating_system "debian-11"
check_architecture "arm64" "aarch64"


@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -61,6 +59,8 @@ MEMGRAPH_BUILD_DEPS=(
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)

environment/os/debian-12-arm.sh Executable file

@ -0,0 +1,134 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "debian-12"
check_architecture "arm64" "aarch64"
TOOLCHAIN_BUILD_DEPS=(
coreutils gcc g++ build-essential make # generic build tools
wget # used for archive download
gnupg # used for archive signature verification
tar gzip bzip2 xz-utils unzip # used for archive unpacking
zlib1g-dev # zlib library used for all builds
libexpat1-dev liblzma-dev python3-dev texinfo # for gdb
libcurl4-openssl-dev # for cmake
libreadline-dev # for cmake and llvm
libffi-dev libxml2-dev # for llvm
libedit-dev libpcre2-dev libpcre3-dev automake bison # for swig
curl # snappy
file # for libunwind
libssl-dev # for libevent
libgmp-dev
gperf # for proxygen
git # for fbthrift
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz-utils # used for archive unpacking
zlib1g # zlib library used for all builds
libexpat1 liblzma5 python3 # for gdb
libcurl4 # for cmake
file # for CPack
libreadline8 # for cmake and llvm
libffi8 libxml2 # for llvm
libssl-dev # for libevent
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkg-config # build system
curl wget # for downloading libs
uuid-dev default-jre-headless # required by antlr
libreadline-dev # for memgraph console
libpython3-dev python3-dev # for query modules
libssl-dev
libseccomp-dev
netcat # tests are using nc to wait for memgraph
python3 virtualenv python3-virtualenv python3-pip # for qa, macro_benchmark and stress tests
python3-yaml # for the configuration generator
libcurl4-openssl-dev # mg-requests
sbcl # for custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs zip unzip default-jdk-headless custom-maven3.9.3 # for driver tests
dotnet-sdk-7.0 golang custom-golang1.18.9 nodejs npm
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
cd "$DIR"
apt update
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
apt install -y wget
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == dotnet-sdk-7.0 ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
wget -nv https://packages.microsoft.com/config/debian/12/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
dpkg -i packages-microsoft-prod.deb
apt-get update
apt-get install -y apt-transport-https dotnet-sdk-7.0
fi
continue
fi
apt install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

environment/os/debian-12.sh Executable file

@@ -0,0 +1,136 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "debian-12"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
coreutils gcc g++ build-essential make # generic build tools
wget # used for archive download
gnupg # used for archive signature verification
tar gzip bzip2 xz-utils unzip # used for archive unpacking
zlib1g-dev # zlib library used for all builds
libexpat1-dev libipt-dev libbabeltrace-dev liblzma-dev python3-dev texinfo # for gdb
libcurl4-openssl-dev # for cmake
libreadline-dev # for cmake and llvm
libffi-dev libxml2-dev # for llvm
libedit-dev libpcre2-dev libpcre3-dev automake bison # for swig
curl # snappy
file # for libunwind
libssl-dev # for libevent
libgmp-dev
gperf # for proxygen
git # for fbthrift
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz-utils # used for archive unpacking
zlib1g # zlib library used for all builds
libexpat1 libipt2 libbabeltrace1 liblzma5 python3 # for gdb
libcurl4 # for cmake
file # for CPack
libreadline8 # for cmake and llvm
libffi8 libxml2 # for llvm
libssl-dev # for libevent
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make cmake pkg-config # build system
curl wget # for downloading libs
uuid-dev default-jre-headless # required by antlr
libreadline-dev # for memgraph console
libpython3-dev python3-dev # for query modules
libssl-dev
libseccomp-dev
netcat-traditional # tests are using nc to wait for memgraph
python3 virtualenv python3-virtualenv python3-pip # for qa, macro_benchmark and stress tests
python3-yaml # for the configuration generator
libcurl4-openssl-dev # mg-requests
sbcl # for custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs zip unzip default-jdk-headless custom-maven3.9.3 # for driver tests
dotnet-sdk-7.0 golang custom-golang1.18.9 nodejs npm
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
cd "$DIR"
apt update
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
apt install -y wget
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == dotnet-sdk-7.0 ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
wget -nv https://packages.microsoft.com/config/debian/12/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
dpkg -i packages-microsoft-prod.deb
apt-get update
apt-get install -y apt-transport-https dotnet-sdk-7.0
fi
continue
fi
apt install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"


@@ -1,10 +1,10 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# IMPORTANT: Deprecated since memgraph v2.12.0.
check_operating_system "fedora-36"
check_architecture "x86_64"
@@ -27,6 +27,7 @@ TOOLCHAIN_BUILD_DEPS=(
libipt libipt-devel # intel
patch
perl # for openssl
git
)
TOOLCHAIN_RUN_DEPS=(


@@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@@ -27,6 +25,7 @@ TOOLCHAIN_BUILD_DEPS=(
libipt libipt-devel # intel
patch
perl # for openssl
git
)
TOOLCHAIN_RUN_DEPS=(
@@ -58,6 +57,16 @@ MEMGRAPH_BUILD_DEPS=(
libtool # for protobuf code generation
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}

environment/os/fedora-39.sh Executable file

@@ -0,0 +1,117 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "fedora-39"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
coreutils-common gcc gcc-c++ make # generic build tools
wget # used for archive download
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
zlib-devel # zlib library used for all builds
expat-devel xz-devel python3-devel texinfo libbabeltrace-devel # for gdb
curl libcurl-devel # for cmake
readline-devel # for cmake and llvm
libffi-devel libxml2-devel # for llvm
libedit-devel pcre-devel pcre2-devel automake bison # for swig
file
openssl-devel
gmp-devel
gperf
diffutils
libipt libipt-devel # intel
patch
perl # for openssl
git
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz # used for archive unpacking
zlib # zlib library used for all builds
expat xz-libs python3 # for gdb
readline # for cmake and llvm
libffi libxml2 # for llvm
openssl-devel
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkgconf-pkg-config # build system
wget # for downloading libs
libuuid-devel java-11-openjdk # required by antlr
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
libseccomp-devel
python3 python3-pip python3-virtualenv python3-virtualenvwrapper python3-pyyaml nmap-ncat # for tests
libcurl-devel # mg-requests
rpm-build rpmlint # for RPM package building
doxygen graphviz # source documentation generators
which nodejs golang zip unzip java-11-openjdk-devel # for driver tests
sbcl # for custom Lisp C++ preprocessing
autoconf # for jemalloc code generation
libtool # for protobuf code generation
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
if [ -v LD_LIBRARY_PATH ]; then
# On Fedora 38 yum/dnf and python3.11 use a newer glibc which is not compatible
# with ours, so we need to momentarily unset LD_LIBRARY_PATH
local OLD_LD_LIBRARY_PATH=${LD_LIBRARY_PATH}
LD_LIBRARY_PATH=""
fi
local missing=""
for pkg in $1; do
if ! dnf list installed "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
if [ -v OLD_LD_LIBRARY_PATH ]; then
echo "Restoring LD_LIBRARY_PATH..."
LD_LIBRARY_PATH=${OLD_LD_LIBRARY_PATH}
fi
}
install() {
cd "$DIR"
if [ "$EUID" -ne 0 ]; then
echo "Please run as root."
exit 1
fi
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
dnf update -y
for pkg in $1; do
dnf install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

environment/os/rocky-9.3.sh Executable file

@@ -0,0 +1,212 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# TODO(gitbuda): Rocky gets automatic updates -> figure out how to handle it.
check_operating_system "rocky-9.3"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
wget # used for archive download
coreutils-common gcc gcc-c++ make # generic build tools
# NOTE: Pure libcurl conflicts with libcurl-minimal
libcurl-devel # cmake build requires it
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
zlib-devel # zlib library used for all builds
expat-devel xz-devel python3-devel perl-Unicode-EastAsianWidth texinfo libbabeltrace-devel # for gdb
readline-devel # for cmake and llvm
libffi-devel libxml2-devel # for llvm
libedit-devel pcre-devel pcre2-devel automake bison # for swig
file
openssl-devel
gmp-devel
gperf
diffutils
libipt libipt-devel # intel
patch
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz # used for archive unpacking
zlib # zlib library used for all builds
expat xz-libs python3 # for gdb
readline # for cmake and llvm
libffi libxml2 # for llvm
openssl-devel
perl # for openssl
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make cmake pkgconf-pkg-config # build system
wget # for downloading libs
libuuid-devel java-11-openjdk # required by antlr
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
libseccomp-devel
python3 python3-pip python3-virtualenv nmap-ncat # for qa, macro_benchmark and stress tests
#
# IMPORTANT: python3-yaml does NOT exist on CentOS
# Install it manually using `pip3 install PyYAML`
#
PyYAML # Package name here does not correspond to the yum package!
libcurl-devel # mg-requests
rpm-build rpmlint # for RPM package building
doxygen graphviz # source documentation generators
which nodejs golang custom-golang1.18.9 # for driver tests
zip unzip java-11-openjdk-devel java-17-openjdk java-17-openjdk-devel custom-maven3.9.3 # for driver tests
cl-asdf common-lisp-controller sbcl # for custom Lisp C++ preprocessing
autoconf # for jemalloc code generation
libtool # for protobuf code generation
cyrus-sasl-devel
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == "PyYAML" ]; then
if ! python3 -c "import yaml" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == "python3-virtualenv" ]; then
continue
fi
if ! yum list installed "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
cd "$DIR"
if [ "$EUID" -ne 0 ]; then
echo "Please run as root."
exit 1
fi
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
yum update -y
yum install -y wget git python3 python3-pip
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == perl-Unicode-EastAsianWidth ]; then
if ! dnf list installed perl-Unicode-EastAsianWidth >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/CRB/x86_64/os/Packages/p/perl-Unicode-EastAsianWidth-12.0-7.el9.noarch.rpm
fi
continue
fi
if [ "$pkg" == texinfo ]; then
if ! dnf list installed texinfo >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/CRB/x86_64/os/Packages/t/texinfo-6.7-15.el9.x86_64.rpm
fi
continue
fi
if [ "$pkg" == libbabeltrace-devel ]; then
if ! dnf list installed libbabeltrace-devel >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/devel/x86_64/os/Packages/l/libbabeltrace-devel-1.5.8-10.el9.x86_64.rpm
fi
continue
fi
if [ "$pkg" == libipt-devel ]; then
if ! dnf list installed libipt-devel >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/devel/x86_64/os/Packages/l/libipt-devel-2.0.4-5.el9.x86_64.rpm
fi
continue
fi
if [ "$pkg" == doxygen ]; then
if ! dnf list installed doxygen >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/CRB/x86_64/os/Packages/d/doxygen-1.9.1-11.el9.x86_64.rpm
fi
continue
fi
if [ "$pkg" == cl-asdf ]; then
if ! dnf list installed cl-asdf >/dev/null 2>/dev/null; then
dnf install -y https://pkgs.sysadmins.ws/el8/base/x86_64/cl-asdf-20101028-18.el8.noarch.rpm
fi
continue
fi
if [ "$pkg" == common-lisp-controller ]; then
if ! dnf list installed common-lisp-controller >/dev/null 2>/dev/null; then
dnf install -y https://pkgs.sysadmins.ws/el8/base/x86_64/common-lisp-controller-7.4-20.el8.noarch.rpm
fi
continue
fi
if [ "$pkg" == sbcl ]; then
if ! dnf list installed sbcl >/dev/null 2>/dev/null; then
dnf install -y https://pkgs.sysadmins.ws/el8/base/x86_64/sbcl-2.0.1-4.el8.x86_64.rpm
fi
continue
fi
if [ "$pkg" == PyYAML ]; then
if [ -z ${SUDO_USER+x} ]; then # Running as root (e.g. Docker).
pip3 install --user PyYAML
else # Running using sudo.
sudo -H -u "$SUDO_USER" bash -c "pip3 install --user PyYAML"
fi
continue
fi
if [ "$pkg" == python3-virtualenv ]; then
if [ -z ${SUDO_USER+x} ]; then # Running as root (e.g. Docker).
pip3 install virtualenv
pip3 install virtualenvwrapper
else # Running using sudo.
sudo -H -u "$SUDO_USER" bash -c "pip3 install virtualenv"
sudo -H -u "$SUDO_USER" bash -c "pip3 install virtualenvwrapper"
fi
continue
fi
yum install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"


@@ -5,17 +5,20 @@ IFS=' '
# NOTE: docker_image_name could be local image build based on release/package images.
# NOTE: each line has to be under quotes, docker_container_type, script_name and docker_image_name separate with a space.
# "docker_container_type script_name docker_image_name"
# docker_container_type OPTIONS:
# * mgrun -> runs a plain/empty operating system for the purpose of testing the native memgraph package
# * mgbuild -> runs the builder container to build memgraph inside it -> it's possible to create builder images using release/package/run.sh
OPERATING_SYSTEMS=(
"mgrun amzn-2 amazonlinux:2"
"mgrun centos-7 centos:7"
"mgrun centos-9 dokken/centos-stream-9"
"mgrun debian-10 debian:10"
"mgrun debian-11 debian:11"
"mgrun fedora-36 fedora:36"
"mgrun ubuntu-18.04 ubuntu:18.04"
"mgrun ubuntu-20.04 ubuntu:20.04"
"mgrun ubuntu-22.04 ubuntu:22.04"
# "mgbuild centos-7 package-mgbuild_centos-7"
# "mgrun amzn-2 amazonlinux:2"
# "mgrun centos-7 centos:7"
# "mgrun centos-9 dokken/centos-stream-9"
# "mgrun debian-10 debian:10"
# "mgrun debian-11 debian:11"
# "mgrun fedora-36 fedora:36"
# "mgrun ubuntu-18.04 ubuntu:18.04"
# "mgrun ubuntu-20.04 ubuntu:20.04"
# "mgrun ubuntu-22.04 ubuntu:22.04"
# "mgbuild debian-12 memgraph/memgraph-builder:v5_debian-12"
)
if [ ! "$(docker info)" ]; then
@@ -33,14 +36,24 @@ print_help () {
# NOTE: This is an idempotent operation!
# TODO(gitbuda): Consider making docker_run always delete + start a new container or add a new function.
docker_run () {
cnt_name="$1"
cnt_image="$2"
cnt_type="$1"
if [[ "$cnt_type" != "mgbuild" && "$cnt_type" != "mgrun" ]]; then
echo "ERROR: Wrong docker_container_type -> valid options are mgbuild, mgrun"
exit 1
fi
cnt_name="$2"
cnt_image="$3"
if [ ! "$(docker ps -q -f name=$cnt_name)" ]; then
if [ "$(docker ps -aq -f status=exited -f name=$cnt_name)" ]; then
echo "Cleanup of the old exited container..."
docker rm $cnt_name
fi
docker run -d --volume "$SCRIPT_DIR/../../:/memgraph" --network host --name "$cnt_name" "$cnt_image" sleep infinity
if [[ "$cnt_type" == "mgbuild" ]]; then
docker run -d --volume "$SCRIPT_DIR/../../:/memgraph" --network host --name "$cnt_name" "$cnt_image"
fi
if [[ "$cnt_type" == "mgrun" ]]; then
docker run -d --volume "$SCRIPT_DIR/../../:/memgraph" --network host --name "$cnt_name" "$cnt_image" sleep infinity
fi
fi
echo "The $cnt_image container is active under $cnt_name name!"
}
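docker_run now takes the container type as its first argument: mgrun containers are kept alive with an explicit sleep infinity, while mgbuild images are expected to provide their own long-running entrypoint. A minimal sketch of direct calls, mirroring the (commented) OPERATING_SYSTEMS entries above; container names and images are illustrative:

# mgrun: plain OS image, kept alive so tests can exec into it
docker_run mgrun mgrun_debian-11 debian:11
# mgbuild: builder image that runs its own entrypoint
docker_run mgbuild mgbuild_debian-12 memgraph/memgraph-builder:v5_debian-12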
@@ -55,9 +68,9 @@ docker_stop_and_rm () {
cnt_name="$1"
if [ "$(docker ps -q -f name=$cnt_name)" ]; then
docker stop "$1"
if [ "$(docker ps -aq -f status=exited -f name=$cnt_name)" ]; then
docker rm "$1"
fi
fi
if [ "$(docker ps -aq -f status=exited -f name=$cnt_name)" ]; then
docker rm "$1"
fi
}
@@ -71,7 +84,7 @@ start_all () {
docker_name="${docker_container_type}_$script_name"
echo ""
echo "~~~~ OPERATING ON $docker_image as $docker_name..."
docker_run "$docker_name" "$docker_image"
docker_run "$docker_container_type" "$docker_name" "$docker_image"
docker_exec "$docker_name" "/memgraph/environment/os/$script_name.sh install NEW_DEPS"
echo "---- DONE EVERYHING FOR $docker_image as $docker_name..."
echo ""


@@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@@ -20,6 +18,10 @@ MEMGRAPH_BUILD_DEPS=(
pkg
)
MEMGRAPH_TEST_DEPS=(
pkg
)
MEMGRAPH_RUN_DEPS=(
pkg
)


@@ -1,10 +1,10 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# IMPORTANT: Deprecated since memgraph v2.12.0.
check_operating_system "ubuntu-18.04"
check_architecture "x86_64"


@@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@@ -60,6 +58,8 @@ MEMGRAPH_BUILD_DEPS=(
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp2
)


@@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@@ -60,6 +58,8 @@ MEMGRAPH_BUILD_DEPS=(
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp2
)


@@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@@ -60,6 +58,8 @@ MEMGRAPH_BUILD_DEPS=(
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp2
)


@@ -2,3 +2,4 @@ archives
build
output
*.tar.gz
tmp_build.sh


@@ -0,0 +1,48 @@
#!/bin/bash -e
# NOTE: Copy this under memgraph/environment/toolchain/vN/tmp_build.sh, edit and test.
pushd () { command pushd "$@" > /dev/null; }
popd () { command popd "$@" > /dev/null; }
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
CPUS=$( grep -c processor < /proc/cpuinfo )
cd "$DIR"
source "$DIR/../../util.sh"
DISTRO="$(operating_system)"
TOOLCHAIN_VERSION=5
NAME=toolchain-v$TOOLCHAIN_VERSION
PREFIX=/opt/$NAME
function log_tool_name () {
echo ""
echo ""
echo "#### $1 ####"
echo ""
echo ""
}
# HERE: Remove/clear dependencies from a given toolchain.
mkdir -p archives && pushd archives
# HERE: Download dependencies here.
popd
mkdir -p build
pushd build
source $PREFIX/activate
export CC=$PREFIX/bin/clang
export CXX=$PREFIX/bin/clang++
export CFLAGS="$CFLAGS -fPIC"
export PATH=$PREFIX/bin:$PATH
export LD_LIBRARY_PATH=$PREFIX/lib64
COMMON_CMAKE_FLAGS="-DCMAKE_INSTALL_PREFIX=$PREFIX
-DCMAKE_PREFIX_PATH=$PREFIX
-DCMAKE_BUILD_TYPE=Release
-DCMAKE_C_COMPILER=$CC
-DCMAKE_CXX_COMPILER=$CXX
-DBUILD_SHARED_LIBS=OFF
-DCMAKE_CXX_STANDARD=20
-DBUILD_TESTING=OFF
-DCMAKE_REQUIRED_INCLUDES=$PREFIX/include
-DCMAKE_POSITION_INDEPENDENT_CODE=ON"
# HERE: Add dependencies to test below.
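As a sketch of how the placeholders might be filled in, here is a HYPOTHETICAL smoke test of a single CMake-based dependency (fmt is only an illustration, not part of the template):

# in the archives section:
#   wget https://github.com/fmtlib/fmt/archive/refs/tags/10.1.1.tar.gz -O fmt-10.1.1.tar.gz
# in the build section:
#   tar -xzf ../archives/fmt-10.1.1.tar.gz
#   pushd fmt-10.1.1
#   mkdir -p build && pushd build
#   cmake .. $COMMON_CMAKE_FLAGS
#   make -j$CPUS install
#   popd && popd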


@@ -307,7 +307,7 @@ if [ ! -f $PREFIX/bin/ld.gold ]; then
fi
log_tool_name "GDB $GDB_VERSION"
if [ ! -f $PREFIX/bin/gdb ]; then
if [[ ! -f "$PREFIX/bin/gdb" && "$DISTRO" -ne "amzn-2" ]]; then
if [ -d gdb-$GDB_VERSION ]; then
rm -rf gdb-$GDB_VERSION
fi
@@ -671,7 +671,6 @@ PROXYGEN_SHA256=5360a8ccdfb2f5a6c7b3eed331ec7ab0e2c792d579c6fff499c85c516c11fe14
WANGLE_SHA256=1002e9c32b6f4837f6a760016e3b3e22f3509880ef3eaad191c80dc92655f23f
# WANGLE_SHA256=0e493c03572bb27fe9ca03a9da5023e52fde99c95abdcaa919bb6190e7e69532
FLEX_VERSION=2.6.4
FMT_SHA256=78b8c0a72b1c35e4443a7e308df52498252d1cefc2b08c9a97bc9ee6cfe61f8b
FMT_VERSION=10.1.1
# NOTE: spdlog depends on exact fmt versions -> UPGRADE fmt and spdlog TOGETHER.
@@ -690,8 +689,8 @@ LZ4_VERSION=1.9.4
SNAPPY_SHA256=75c1fbb3d618dd3a0483bff0e26d0a92b495bbe5059c8b4f1c962b478b6e06e7
SNAPPY_VERSION=1.1.9
XZ_VERSION=5.2.5 # for LZMA
ZLIB_VERSION=1.3
ZSTD_VERSION=1.5.0
ZLIB_VERSION=1.3.1
ZSTD_VERSION=1.5.5
pushd archives
if [ ! -f boost_$BOOST_VERSION_UNDERSCORES.tar.gz ]; then
@@ -700,7 +699,7 @@ if [ ! -f boost_$BOOST_VERSION_UNDERSCORES.tar.gz ]; then
wget https://boostorg.jfrog.io/artifactory/main/release/$BOOST_VERSION/source/boost_$BOOST_VERSION_UNDERSCORES.tar.gz -O boost_$BOOST_VERSION_UNDERSCORES.tar.gz
fi
if [ ! -f bzip2-$BZIP2_VERSION.tar.gz ]; then
wget https://sourceforge.net/projects/bzip2/files/bzip2-$BZIP2_VERSION.tar.gz -O bzip2-$BZIP2_VERSION.tar.gz
wget https://sourceware.org/pub/bzip2/bzip2-$BZIP2_VERSION.tar.gz -O bzip2-$BZIP2_VERSION.tar.gz
fi
if [ ! -f double-conversion-$DOUBLE_CONVERSION_VERSION.tar.gz ]; then
wget https://github.com/google/double-conversion/archive/refs/tags/v$DOUBLE_CONVERSION_VERSION.tar.gz -O double-conversion-$DOUBLE_CONVERSION_VERSION.tar.gz
@@ -708,9 +707,7 @@ fi
if [ ! -f fizz-$FBLIBS_VERSION.tar.gz ]; then
wget https://github.com/facebookincubator/fizz/releases/download/v$FBLIBS_VERSION/fizz-v$FBLIBS_VERSION.tar.gz -O fizz-$FBLIBS_VERSION.tar.gz
fi
if [ ! -f flex-$FLEX_VERSION.tar.gz ]; then
wget https://github.com/westes/flex/releases/download/v$FLEX_VERSION/flex-$FLEX_VERSION.tar.gz -O flex-$FLEX_VERSION.tar.gz
fi
if [ ! -f fmt-$FMT_VERSION.tar.gz ]; then
wget https://github.com/fmtlib/fmt/archive/refs/tags/$FMT_VERSION.tar.gz -O fmt-$FMT_VERSION.tar.gz
fi
@@ -765,14 +762,6 @@ echo "$BZIP2_SHA256 bzip2-$BZIP2_VERSION.tar.gz" | sha256sum -c
echo "$DOUBLE_CONVERSION_SHA256 double-conversion-$DOUBLE_CONVERSION_VERSION.tar.gz" | sha256sum -c
# verify fizz
echo "$FIZZ_SHA256 fizz-$FBLIBS_VERSION.tar.gz" | sha256sum -c
# verify flex
if [ ! -f flex-$FLEX_VERSION.tar.gz.sig ]; then
wget https://github.com/westes/flex/releases/download/v$FLEX_VERSION/flex-$FLEX_VERSION.tar.gz.sig
fi
if false; then
$GPG --keyserver $KEYSERVER --recv-keys 0xE4B29C8D64885307
$GPG --verify flex-$FLEX_VERSION.tar.gz.sig flex-$FLEX_VERSION.tar.gz
fi
# verify fmt
echo "$FMT_SHA256 fmt-$FMT_VERSION.tar.gz" | sha256sum -c
# verify spdlog
@@ -1025,7 +1014,6 @@ if [ ! -d $PREFIX/include/gflags ]; then
if [ -d gflags ]; then
rm -rf gflags
fi
git clone https://github.com/memgraph/gflags.git gflags
pushd gflags
git checkout $GFLAGS_COMMIT_HASH
@@ -1034,7 +1022,7 @@ if [ ! -d $PREFIX/include/gflags ]; then
cmake .. $COMMON_CMAKE_FLAGS \
-DREGISTER_INSTALL_PREFIX=OFF \
-DBUILD_gflags_nothreads_LIB=OFF \
-DGFLAGS_NO_FILENAMES=0
-DGFLAGS_NO_FILENAMES=1
make -j$CPUS install
popd && popd
fi
@@ -1232,18 +1220,6 @@ if false; then
fi
fi
log_tool_name "flex $FLEX_VERSION"
if [ ! -f $PREFIX/include/FlexLexer.h ]; then
if [ -d flex-$FLEX_VERSION ]; then
rm -rf flex-$FLEX_VERSION
fi
tar -xzf ../archives/flex-$FLEX_VERSION.tar.gz
pushd flex-$FLEX_VERSION
./configure $COMMON_CONFIGURE_FLAGS
make -j$CPUS install
popd
fi
popd
# NOTE: It's important/clean (e.g., easier upload to S3) to have a separated
# folder to the output archive.


@@ -20,14 +20,18 @@ if [ ! -f "$INPUT" ]; then
fi
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} BEGIN and COMMIT are required because variables share the same name (e.g. row)"
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} CONSTRAINTS are just skipped -> ${COLOR_RED}please create consraints manually if needed${COLOR_NULL}"
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} CONSTRAINTS are just skipped -> ${COLOR_RED}please create constraints manually if needed${COLOR_NULL}"
echo 'CREATE INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' > "$OUTPUT"
sed -e 's/^:begin/BEGIN/g; s/^BEGIN$/BEGIN;/g;' \
-e 's/^:commit/COMMIT/g; s/^COMMIT$/COMMIT;/g;' \
-e '/^CALL/d; /^SCHEMA AWAIT/d;' \
-e 's/CREATE RANGE INDEX FOR (n:/CREATE INDEX ON :/g;' \
-e 's/) ON (n./(/g;' \
-e '/^CREATE CONSTRAINT/d; /^DROP CONSTRAINT/d;' "$INPUT" > "$OUTPUT"
-e '/^CREATE CONSTRAINT/d; /^DROP CONSTRAINT/d;' "$INPUT" >> "$OUTPUT"
echo 'DROP INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' >> "$OUTPUT"
echo ""
echo -e "${COLOR_GREEN}DONE!${COLOR_NULL} Please find Memgraph compatible cypherl|.cypher file under $OUTPUT"


@@ -0,0 +1,61 @@
#!/bin/bash -e
COLOR_ORANGE="\e[38;5;208m"
COLOR_GREEN="\e[38;5;35m"
COLOR_RED="\e[0;31m"
COLOR_NULL="\e[0m"
print_help() {
echo -e "${COLOR_ORANGE}HOW TO RUN:${COLOR_NULL} $0 input_file_schema_path input_file_nodes_path input_file_relationships_path input_file_cleanup_path output_file_path"
exit 1
}
if [ "$#" -ne 5 ]; then
print_help
fi
INPUT_SCHEMA="$1"
INPUT_NODES="$2"
INPUT_RELATIONSHIPS="$3"
INPUT_CLEANUP="$4"
OUTPUT="$5"
if [ ! -f "$INPUT_SCHEMA" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_NODES" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_RELATIONSHIPS" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_CLEANUP" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} BEGIN and COMMIT are required because variables share the same name (e.g. row)"
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} CONSTRAINTS are just skipped -> ${COLOR_RED}please create constraints manually if needed${COLOR_NULL}"
echo 'CREATE INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' > "$OUTPUT"
sed -e 's/CREATE RANGE INDEX FOR (n:/CREATE INDEX ON :/g;' \
-e 's/) ON (n./(/g;' \
-e '/^CREATE CONSTRAINT/d' "$INPUT_SCHEMA" >> "$OUTPUT"
cat "$INPUT_NODES" >> "$OUTPUT"
cat "$INPUT_RELATIONSHIPS" >> "$OUTPUT"
sed -e '/^DROP CONSTRAINT/d' "$INPUT_CLEANUP" >> "$OUTPUT"
echo 'DROP INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' >> "$OUTPUT"
echo ""
echo -e "${COLOR_GREEN}DONE!${COLOR_NULL} Please find Memgraph compatible cypherl|.cypher file under $OUTPUT"
echo ""
echo "Please import data by executing => \`cat $OUTPUT | mgconsole\`"


@@ -0,0 +1,64 @@
#!/bin/bash -e
COLOR_ORANGE="\e[38;5;208m"
COLOR_GREEN="\e[38;5;35m"
COLOR_RED="\e[0;31m"
COLOR_NULL="\e[0m"
print_help() {
echo -e "${COLOR_ORANGE}HOW TO RUN:${COLOR_NULL} $0 input_file_schema_path input_file_nodes_path input_file_relationships_path input_file_cleanup_path output_file_schema_path output_file_nodes_path output_file_relationships_path output_file_cleanup_path"
exit 1
}
if [ "$#" -ne 8 ]; then
print_help
fi
INPUT_SCHEMA="$1"
INPUT_NODES="$2"
INPUT_RELATIONSHIPS="$3"
INPUT_CLEANUP="$4"
OUTPUT_SCHEMA="$5"
OUTPUT_NODES="$6"
OUTPUT_RELATIONSHIPS="$7"
OUTPUT_CLEANUP="$8"
if [ ! -f "$INPUT_SCHEMA" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_NODES" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_RELATIONSHIPS" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_CLEANUP" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} BEGIN and COMMIT are required because variables share the same name (e.g. row)"
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} CONSTRAINTS are just skipped -> ${COLOR_RED}please create constraints manually if needed${COLOR_NULL}"
echo 'CREATE INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' > "$OUTPUT_SCHEMA"
sed -e 's/CREATE RANGE INDEX FOR (n:/CREATE INDEX ON :/g;' \
-e 's/) ON (n./(/g;' \
-e '/^CREATE CONSTRAINT/d' "$INPUT_SCHEMA" >> "$OUTPUT_SCHEMA"
cat "$INPUT_NODES" > "$OUTPUT_NODES"
cat "$INPUT_RELATIONSHIPS" > "$OUTPUT_RELATIONSHIPS"
sed -e '/^DROP CONSTRAINT/d' "$INPUT_CLEANUP" >> "$OUTPUT_CLEANUP"
echo 'DROP INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' >> "$OUTPUT_CLEANUP"
echo ""
echo -e "${COLOR_GREEN}DONE!${COLOR_NULL} Please find Memgraph compatible cypherl|.cypher files under $OUTPUT_SCHEMA, $OUTPUT_NODES, $OUTPUT_RELATIONSHIPS and $OUTPUT_CLEANUP"
echo ""
echo "Please import data by executing => \`cat $OUTPUT_SCHEMA | mgconsole\`, \`cat $OUTPUT_NODES | mgconsole\`, \`cat $OUTPUT_RELATIONSHIPS | mgconsole\` and \`cat $OUTPUT_CLEANUP | mgconsole\`"


@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -283,7 +283,7 @@ inline mgp_list *list_all_unique_constraints(mgp_graph *graph, mgp_memory *memor
}
// mgp_graph
inline bool graph_is_transactional(mgp_graph *graph) { return MgInvoke<int>(mgp_graph_is_transactional, graph); }
inline bool graph_is_mutable(mgp_graph *graph) { return MgInvoke<int>(mgp_graph_is_mutable, graph); }
@@ -326,6 +326,21 @@ inline mgp_vertex *graph_get_vertex_by_id(mgp_graph *g, mgp_vertex_id id, mgp_me
return MgInvoke<mgp_vertex *>(mgp_graph_get_vertex_by_id, g, id, memory);
}
inline bool graph_has_text_index(mgp_graph *graph, const char *index_name) {
return MgInvoke<int>(mgp_graph_has_text_index, graph, index_name);
}
inline mgp_map *graph_search_text_index(mgp_graph *graph, const char *index_name, const char *search_query,
text_search_mode search_mode, mgp_memory *memory) {
return MgInvoke<mgp_map *>(mgp_graph_search_text_index, graph, index_name, search_query, search_mode, memory);
}
inline mgp_map *graph_aggregate_over_text_index(mgp_graph *graph, const char *index_name, const char *search_query,
const char *aggregation_query, mgp_memory *memory) {
return MgInvoke<mgp_map *>(mgp_graph_aggregate_over_text_index, graph, index_name, search_query, aggregation_query,
memory);
}
inline mgp_vertices_iterator *graph_iter_vertices(mgp_graph *g, mgp_memory *memory) {
return MgInvoke<mgp_vertices_iterator *>(mgp_graph_iter_vertices, g, memory);
}


@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -891,6 +891,36 @@ enum mgp_error mgp_edge_iter_properties(struct mgp_edge *e, struct mgp_memory *m
enum mgp_error mgp_graph_get_vertex_by_id(struct mgp_graph *g, struct mgp_vertex_id id, struct mgp_memory *memory,
struct mgp_vertex **result);
/// Result is non-zero if the index with the given name exists.
/// The current implementation always returns without errors.
enum mgp_error mgp_graph_has_text_index(struct mgp_graph *graph, const char *index_name, int *result);
/// Available modes of searching text indices.
MGP_ENUM_CLASS text_search_mode{
SPECIFIED_PROPERTIES,
REGEX,
ALL_PROPERTIES,
};
/// Search the named text index for the given query. The result is a map with the "search_results" and "error_msg" keys.
/// The "search_results" key contains the vertices whose text-indexed properties match the given query.
/// In case of a Tantivy error, the "search_results" key is absent, and "error_msg" contains the error message.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if there's an allocation error while constructing the results map.
/// Return mgp_error::MGP_ERROR_KEY_ALREADY_EXISTS if the same key is being created in the results map more than once.
enum mgp_error mgp_graph_search_text_index(struct mgp_graph *graph, const char *index_name, const char *search_query,
enum text_search_mode search_mode, struct mgp_memory *memory,
struct mgp_map **result);
/// Aggregate over the results of a search over the named text index. The result is a map with the "aggregation_results"
/// and "error_msg" keys.
/// The "aggregation_results" key contains the vertices whose text-indexed properties match the given query.
/// In case of a Tantivy error, the "aggregation_results" key is absent, and "error_msg" contains the error message.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if there's an allocation error while constructing the results map.
/// Return mgp_error::MGP_ERROR_KEY_ALREADY_EXISTS if the same key is being created in the results map more than once.
enum mgp_error mgp_graph_aggregate_over_text_index(struct mgp_graph *graph, const char *index_name,
const char *search_query, const char *aggregation_query,
struct mgp_memory *memory, struct mgp_map **result);
/// Creates label index for given label.
/// mgp_error::MGP_ERROR_NO_ERROR is always returned.
/// if label index already exists, result will be 0, otherwise 1.


@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -32,6 +32,15 @@
namespace mgp {
class TextSearchException : public std::exception {
public:
explicit TextSearchException(std::string message) : message_(std::move(message)) {}
const char *what() const noexcept override { return message_.c_str(); }
private:
std::string message_;
};
class IndexException : public std::exception {
public:
explicit IndexException(std::string message) : message_(std::move(message)) {}
@@ -4306,12 +4315,12 @@ inline void AddParamsReturnsToProc(mgp_proc *proc, std::vector<Parameter> &param
}
} // namespace detail
inline bool CreateLabelIndex(mgp_graph *memgaph_graph, const std::string_view label) {
return create_label_index(memgaph_graph, label.data());
inline bool CreateLabelIndex(mgp_graph *memgraph_graph, const std::string_view label) {
return create_label_index(memgraph_graph, label.data());
}
inline bool DropLabelIndex(mgp_graph *memgaph_graph, const std::string_view label) {
return drop_label_index(memgaph_graph, label.data());
inline bool DropLabelIndex(mgp_graph *memgraph_graph, const std::string_view label) {
return drop_label_index(memgraph_graph, label.data());
}
inline List ListAllLabelIndices(mgp_graph *memgraph_graph) {
@@ -4322,14 +4331,14 @@ inline List ListAllLabelIndices(mgp_graph *memgraph_graph) {
return List(label_indices);
}
inline bool CreateLabelPropertyIndex(mgp_graph *memgaph_graph, const std::string_view label,
inline bool CreateLabelPropertyIndex(mgp_graph *memgraph_graph, const std::string_view label,
const std::string_view property) {
return create_label_property_index(memgaph_graph, label.data(), property.data());
return create_label_property_index(memgraph_graph, label.data(), property.data());
}
inline bool DropLabelPropertyIndex(mgp_graph *memgaph_graph, const std::string_view label,
inline bool DropLabelPropertyIndex(mgp_graph *memgraph_graph, const std::string_view label,
const std::string_view property) {
return drop_label_property_index(memgaph_graph, label.data(), property.data());
return drop_label_property_index(memgraph_graph, label.data(), property.data());
}
inline List ListAllLabelPropertyIndices(mgp_graph *memgraph_graph) {
@@ -4340,6 +4349,58 @@ inline List ListAllLabelPropertyIndices(mgp_graph *memgraph_graph) {
return List(label_property_indices);
}
namespace {
constexpr std::string_view kErrorMsgKey = "error_msg";
constexpr std::string_view kSearchResultsKey = "search_results";
constexpr std::string_view kAggregationResultsKey = "aggregation_results";
} // namespace
inline List SearchTextIndex(mgp_graph *memgraph_graph, std::string_view index_name, std::string_view search_query,
text_search_mode search_mode) {
auto results_or_error = Map(mgp::MemHandlerCallback(graph_search_text_index, memgraph_graph, index_name.data(),
search_query.data(), search_mode));
if (results_or_error.KeyExists(kErrorMsgKey)) {
if (!results_or_error.At(kErrorMsgKey).IsString()) {
throw TextSearchException{"The error message is not a string!"};
}
throw TextSearchException(results_or_error.At(kErrorMsgKey).ValueString().data());
}
if (!results_or_error.KeyExists(kSearchResultsKey)) {
throw TextSearchException{"Incomplete text index search results!"};
}
if (!results_or_error.At(kSearchResultsKey).IsList()) {
throw TextSearchException{"Text index search results have wrong type!"};
}
return results_or_error.At(kSearchResultsKey).ValueList();
}
inline std::string_view AggregateOverTextIndex(mgp_graph *memgraph_graph, std::string_view index_name,
std::string_view search_query, std::string_view aggregation_query) {
auto results_or_error =
Map(mgp::MemHandlerCallback(graph_aggregate_over_text_index, memgraph_graph, index_name.data(),
search_query.data(), aggregation_query.data()));
if (results_or_error.KeyExists(kErrorMsgKey)) {
if (!results_or_error.At(kErrorMsgKey).IsString()) {
throw TextSearchException{"The error message is not a string!"};
}
throw TextSearchException(results_or_error.At(kErrorMsgKey).ValueString().data());
}
if (!results_or_error.KeyExists(kAggregationResultsKey)) {
throw TextSearchException{"Incomplete text index aggregation results!"};
}
if (!results_or_error.At(kAggregationResultsKey).IsString()) {
throw TextSearchException{"Text index aggregation results have wrong type!"};
}
return results_or_error.At(kAggregationResultsKey).ValueString();
}
inline bool CreateExistenceConstraint(mgp_graph *memgraph_graph, const std::string_view label,
const std::string_view property) {
return create_existence_constraint(memgraph_graph, label.data(), property.data());

init

@@ -14,6 +14,7 @@ function print_help () {
echo "Optional arguments:"
echo -e " -h\tdisplay this help and exit"
echo -e " --without-libs-setup\tskip the step for setting up libs"
echo -e " --ci\tscript is being run inside ci"
}
function setup_virtualenv () {
@@ -35,6 +36,7 @@ }
}
setup_libs=true
ci=false
if [[ $# -eq 1 && "$1" == "-h" ]]; then
print_help
exit 0
@@ -45,6 +47,10 @@ else
shift
setup_libs=false
;;
--ci)
shift
ci=true
;;
*)
# unknown option
echo "Invalid argument provided: $1"
@@ -76,11 +82,13 @@ if [[ "$setup_libs" == "true" ]]; then
fi
# Fix for centos 7 during release
if [ "${DISTRO}" = "centos-7" ] || [ "${DISTRO}" = "debian-11" ] || [ "${DISTRO}" = "amzn-2" ]; then
if python3 -m pip show virtualenv >/dev/null 2>/dev/null; then
python3 -m pip uninstall -y virtualenv
if [[ "$ci" == "false" ]]; then
if [ "${DISTRO}" = "centos-7" ] || [ "${DISTRO}" = "debian-11" ] || [ "${DISTRO}" = "amzn-2" ]; then
if python3 -m pip show virtualenv >/dev/null 2>/dev/null; then
python3 -m pip uninstall -y virtualenv
fi
python3 -m pip install virtualenv
fi
python3 -m pip install virtualenv
fi
# setup gql_behave dependencies
@@ -119,14 +127,16 @@ fi
# Install precommit hook except on old operating systems because we don't
# develop on them -> pre-commit hook not required -> we can use latest
# packages.
if [ "${DISTRO}" != "centos-7" ] && [ "$DISTRO" != "debian-10" ] && [ "${DISTRO}" != "ubuntu-18.04" ] && [ "${DISTRO}" != "amzn-2" ]; then
python3 -m pip install pre-commit
python3 -m pre_commit install
# Install py format tools for usage during the development.
echo "Install black formatter"
python3 -m pip install black==23.1.*
echo "Install isort"
python3 -m pip install isort==5.12.*
if [[ "$ci" == "false" ]]; then
if [ "${DISTRO}" != "centos-7" ] && [ "$DISTRO" != "debian-10" ] && [ "${DISTRO}" != "ubuntu-18.04" ] && [ "${DISTRO}" != "amzn-2" ]; then
python3 -m pip install pre-commit
python3 -m pre_commit install
# Install py format tools for usage during the development.
echo "Install black formatter"
python3 -m pip install black==23.1.*
echo "Install isort"
python3 -m pip install isort==5.12.*
fi
fi
# Link `include/mgp.py` with `release/mgp/mgp.py`
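The new flag lets CI skip the virtualenv reinstall workaround and the pre-commit/formatter setup, both of which only matter for local development. A minimal sketch of the two modes:

./init --ci                    # CI: install deps, skip dev-only tooling
./init --without-libs-setup    # local: keep dev tooling, skip the libs step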

libs/.gitignore vendored

@@ -7,3 +7,4 @@
!pulsar.patch
!antlr4.10.1.patch
!rocksdb8.1.1.patch
!nuraft2.1.0.patch


@@ -16,7 +16,7 @@ set(GFLAGS_NOTHREADS OFF)
# NOTE: config/generate.py depends on the gflags help XML format.
find_package(gflags REQUIRED)
find_package(fmt 8.0.1)
find_package(fmt 8.0.1 REQUIRED)
find_package(ZLIB 1.2.11 REQUIRED)
set(LIB_DIR ${CMAKE_CURRENT_SOURCE_DIR})
@@ -295,6 +295,34 @@ set_path_external_library(jemalloc STATIC
import_header_library(rangev3 ${CMAKE_CURRENT_SOURCE_DIR}/rangev3/include)
ExternalProject_Add(mgcxx-proj
PREFIX mgcxx-proj
GIT_REPOSITORY https://github.com/memgraph/mgcxx
GIT_TAG "v0.0.4"
CMAKE_ARGS
"-DCMAKE_INSTALL_PREFIX=<INSTALL_DIR>"
"-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}"
"-DENABLE_TESTS=OFF"
"-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}"
"-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}"
INSTALL_DIR "${PROJECT_BINARY_DIR}/mgcxx"
)
ExternalProject_Get_Property(mgcxx-proj install_dir)
set(MGCXX_ROOT ${install_dir})
add_library(tantivy_text_search STATIC IMPORTED GLOBAL)
add_dependencies(tantivy_text_search mgcxx-proj)
set_property(TARGET tantivy_text_search PROPERTY IMPORTED_LOCATION ${MGCXX_ROOT}/lib/libtantivy_text_search.a)
add_library(mgcxx_text_search STATIC IMPORTED GLOBAL)
add_dependencies(mgcxx_text_search mgcxx-proj)
set_property(TARGET mgcxx_text_search PROPERTY IMPORTED_LOCATION ${MGCXX_ROOT}/lib/libmgcxx_text_search.a)
# We need to create the include directory first in order to be able to add it
# as an include directory. The header files in the include directory will be
# generated later during the build process.
file(MAKE_DIRECTORY ${MGCXX_ROOT}/include)
set_property(TARGET mgcxx_text_search PROPERTY INTERFACE_INCLUDE_DIRECTORIES ${MGCXX_ROOT}/include)
# Setup NuRaft
import_external_library(nuraft STATIC
${CMAKE_CURRENT_SOURCE_DIR}/nuraft/lib/libnuraft.a


@@ -5,7 +5,7 @@ index ee9b58c..31359a9 100644
@@ -48,7 +48,7 @@ option(LIBRDTSC_USE_PMU "Enables PMU usage on ARM platforms" OFF)
# | Library Build and Install Properties |
# +--------------------------------------------------------+
-add_library(rdtsc SHARED
+add_library(rdtsc
src/cycles.c
@@ -14,7 +14,7 @@ index ee9b58c..31359a9 100644
@@ -72,15 +72,6 @@ target_include_directories(rdtsc
PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include
)
-# Install directory changes depending on build mode
-if (CMAKE_BUILD_TYPE MATCHES "^[Dd]ebug")
- # During debug, the library will be installed into a local directory
@@ -27,3 +27,15 @@ index ee9b58c..31359a9 100644
# Specifying what to export when installing (GNUInstallDirs required)
install(TARGETS rdtsc
EXPORT librstsc-config
diff --git a/include/librdtsc/common_timer.h b/include/librdtsc/common_timer.h
index a6922d8..080dc77 100644
--- a/include/librdtsc/common_timer.h
+++ b/include/librdtsc/common_timer.h
@@ -2,6 +2,7 @@
#define LIBRDTSC_COMMON_TIMER_H
#include <librdtsc/common.h>
+#include <librdtsc/cycles.h>
extern uint64_t rdtsc_get_tsc_freq_arch();
extern uint64_t rdtsc_get_tsc_freq();

libs/nuraft2.1.0.patch Normal file

@@ -0,0 +1,24 @@
diff --git a/include/libnuraft/asio_service_options.hxx b/include/libnuraft/asio_service_options.hxx
index 8fe1ec9..9497355 100644
--- a/include/libnuraft/asio_service_options.hxx
+++ b/include/libnuraft/asio_service_options.hxx
@@ -17,6 +17,7 @@ limitations under the License.
#pragma once
+#include <cstdint>
#include <functional>
#include <string>
#include <system_error>
diff --git a/include/libnuraft/callback.hxx b/include/libnuraft/callback.hxx
index 7b71624..d48c1e2 100644
--- a/include/libnuraft/callback.hxx
+++ b/include/libnuraft/callback.hxx
@@ -18,6 +18,7 @@ limitations under the License.
#ifndef _CALLBACK_H_
#define _CALLBACK_H_
+#include <cstdint>
#include <functional>
#include <string>


@@ -1,21 +0,0 @@
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 6761929..6a369af 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -220,6 +220,7 @@ else()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -momit-leaf-frame-pointer")
endif()
endif()
+ set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-deprecated-copy -Wno-unused-but-set-variable")
endif()
include(CheckCCompilerFlag)
@@ -997,7 +998,7 @@ if(NOT WIN32 OR ROCKSDB_INSTALL_ON_WINDOWS)
if(ROCKSDB_BUILD_SHARED)
install(
- TARGETS ${ROCKSDB_SHARED_LIB}
+ TARGETS ${ROCKSDB_SHARED_LIB} OPTIONAL
EXPORT RocksDBTargets
COMPONENT runtime
ARCHIVE DESTINATION "${CMAKE_INSTALL_LIBDIR}"


@@ -123,10 +123,11 @@ declare -A primary_urls=(
["pulsar"]="http://$local_cache_host/git/pulsar.git"
["librdtsc"]="http://$local_cache_host/git/librdtsc.git"
["ctre"]="http://$local_cache_host/file/hanickadot/compile-time-regular-expressions/v3.7.2/single-header/ctre.hpp"
["absl"]="https://$local_cache_host/git/abseil-cpp.git"
["jemalloc"]="https://$local_cache_host/git/jemalloc.git"
["range-v3"]="https://$local_cache_host/git/ericniebler/range-v3.git"
["nuraft"]="https://$local_cache_host/git/eBay/NuRaft.git"
["absl"]="http://$local_cache_host/git/abseil-cpp.git"
["jemalloc"]="http://$local_cache_host/git/jemalloc.git"
["range-v3"]="http://$local_cache_host/git/range-v3.git"
["nuraft"]="http://$local_cache_host/git/NuRaft.git"
["asio"]="http://$local_cache_host/git/asio.git"
)
# The goal of secondary urls is to have links to the "source of truth" of
@@ -157,6 +158,7 @@ declare -A secondary_urls=(
["jemalloc"]="https://github.com/jemalloc/jemalloc.git"
["range-v3"]="https://github.com/ericniebler/range-v3.git"
["nuraft"]="https://github.com/eBay/NuRaft.git"
["asio"]="https://github.com/chriskohlhoff/asio.git"
)
# antlr
@@ -168,12 +170,11 @@ pushd antlr4
git apply ../antlr4.10.1.patch
popd
# cppitertools v2.0 2019-12-23
cppitertools_ref="cb3635456bdb531121b82b4d2e3afc7ae1f56d47"
cppitertools_ref="v2.1" # 2021-01-15
repo_clone_try_double "${primary_urls[cppitertools]}" "${secondary_urls[cppitertools]}" "cppitertools" "$cppitertools_ref"
# rapidcheck
rapidcheck_tag="7bc7d302191a4f3d0bf005692677126136e02f60" # (2020-05-04)
rapidcheck_tag="1c91f40e64d87869250cfb610376c629307bf77d" # (2023-08-15)
repo_clone_try_double "${primary_urls[rapidcheck]}" "${secondary_urls[rapidcheck]}" "rapidcheck" "$rapidcheck_tag"
# google benchmark
@@ -181,7 +182,7 @@ benchmark_tag="v1.6.0"
repo_clone_try_double "${primary_urls[gbenchmark]}" "${secondary_urls[gbenchmark]}" "benchmark" "$benchmark_tag" true
# google test
googletest_tag="release-1.8.0"
googletest_tag="v1.14.0"
repo_clone_try_double "${primary_urls[gtest]}" "${secondary_urls[gtest]}" "googletest" "$googletest_tag" true
# libbcrypt
@@ -221,7 +222,7 @@ repo_clone_try_double "${primary_urls[pymgclient]}" "${secondary_urls[pymgclient
mgconsole_tag="v1.4.0" # (2023-05-21)
repo_clone_try_double "${primary_urls[mgconsole]}" "${secondary_urls[mgconsole]}" "mgconsole" "$mgconsole_tag" true
spdlog_tag="v1.9.2" # (2021-08-12)
spdlog_tag="v1.12.0" # (2022-11-02)
repo_clone_try_double "${primary_urls[spdlog]}" "${secondary_urls[spdlog]}" "spdlog" "$spdlog_tag" true
# librdkafka
@@ -267,11 +268,13 @@ repo_clone_try_double "${primary_urls[jemalloc]}" "${secondary_urls[jemalloc]}"
pushd jemalloc
./autogen.sh
MALLOC_CONF="retain:false,percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000" \
MALLOC_CONF="background_thread:true,retain:false,percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000" \
./configure \
--disable-cxx \
--with-lg-page=12 \
--with-lg-hugepage=21 \
--enable-shared=no --prefix=$working_dir \
--with-malloc-conf="retain:false,percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000"
--with-malloc-conf="background_thread:true,retain:false,percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000"
make -j$CPUS install
popd
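The MALLOC_CONF change bakes background_thread:true into the jemalloc build, so purging of decayed memory happens on a dedicated thread instead of piggybacking on allocation calls. For a one-off experiment, the same option can usually be toggled per process through jemalloc's standard environment knob; a hypothetical override:

# Assumes the binary links jemalloc with the stock config prefix.
MALLOC_CONF="background_thread:false" ./memgraph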
@@ -284,5 +287,8 @@ repo_clone_try_double "${primary_urls[range-v3]}" "${secondary_urls[range-v3]}"
nuraft_tag="v2.1.0"
repo_clone_try_double "${primary_urls[nuraft]}" "${secondary_urls[nuraft]}" "nuraft" "$nuraft_tag" true
pushd nuraft
git apply ../nuraft2.1.0.patch
asio_tag="asio-1-29-0"
repo_clone_try_double "${primary_urls[asio]}" "${secondary_urls[asio]}" "asio" "$asio_tag" true
./prepare.sh
popd


@@ -6,6 +6,8 @@ project(memgraph_query_modules)
disallow_in_source_build()
find_package(fmt REQUIRED)
# Everything that is installed here, should be under the "query_modules" component.
set(CMAKE_INSTALL_DEFAULT_COMPONENT_NAME "query_modules")
string(TOLOWER ${CMAKE_BUILD_TYPE} lower_build_type)
@@ -58,6 +60,22 @@ install(PROGRAMS $<TARGET_FILE:schema>
# Also install the source of the example, so user can read it.
install(FILES schema.cpp DESTINATION lib/memgraph/query_modules/src)
add_library(text SHARED text_search_module.cpp)
target_include_directories(text PRIVATE ${CMAKE_SOURCE_DIR}/include)
target_compile_options(text PRIVATE -Wall)
target_link_libraries(text PRIVATE -static-libgcc -static-libstdc++ fmt::fmt)
# Strip C++ example in release build.
if (lower_build_type STREQUAL "release")
add_custom_command(TARGET text POST_BUILD
COMMAND strip -s $<TARGET_FILE:text>
COMMENT "Stripping symbols and sections from the C++ text_search module")
endif()
install(PROGRAMS $<TARGET_FILE:text>
DESTINATION lib/memgraph/query_modules
RENAME text.so)
# Also install the source of the example, so user can read it.
install(FILES text_search_module.cpp DESTINATION lib/memgraph/query_modules/src)
# Install the Python example and modules
install(FILES example.py DESTINATION lib/memgraph/query_modules RENAME py_example.py)
install(FILES graph_analyzer.py DESTINATION lib/memgraph/query_modules)


@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -9,10 +9,11 @@
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#include <boost/functional/hash.hpp>
#include <mgp.hpp>
#include "utils/string.hpp"
#include <optional>
#include <unordered_set>
namespace Schema {
@@ -37,6 +38,7 @@ constexpr std::string_view kParameterIndices = "indices";
constexpr std::string_view kParameterUniqueConstraints = "unique_constraints";
constexpr std::string_view kParameterExistenceConstraints = "existence_constraints";
constexpr std::string_view kParameterDropExisting = "drop_existing";
constexpr int kInitialNumberOfPropertyOccurances = 1;
std::string TypeOf(const mgp::Type &type);
@@ -108,83 +110,79 @@ void Schema::ProcessPropertiesRel(mgp::Record &record, const std::string_view &t
record.Insert(std::string(kReturnMandatory).c_str(), mandatory);
}
struct Property {
std::string name;
mgp::Value value;
struct PropertyInfo {
std::unordered_set<std::string> property_types; // property types
int64_t number_of_property_occurrences = 0;
Property(const std::string &name, mgp::Value &&value) : name(name), value(std::move(value)) {}
PropertyInfo() = default;
explicit PropertyInfo(std::string &&property_type)
: property_types({std::move(property_type)}),
number_of_property_occurrences(Schema::kInitialNumberOfPropertyOccurances) {}
};
struct LabelsInfo {
std::unordered_map<std::string, PropertyInfo> properties; // key is a property name
int64_t number_of_label_occurrences = 0;
};
struct LabelsHash {
std::size_t operator()(const std::set<std::string> &set) const {
std::size_t seed = set.size();
for (const auto &i : set) {
seed ^= std::hash<std::string>{}(i) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}
return seed;
}
std::size_t operator()(const std::set<std::string> &s) const { return boost::hash_range(s.begin(), s.end()); }
};
struct LabelsComparator {
bool operator()(const std::set<std::string> &lhs, const std::set<std::string> &rhs) const { return lhs == rhs; }
};
struct PropertyComparator {
bool operator()(const Property &lhs, const Property &rhs) const { return lhs.name < rhs.name; }
};
struct PropertyInfo {
std::set<Property, PropertyComparator> properties;
bool mandatory;
};
void Schema::NodeTypeProperties(mgp_list * /*args*/, mgp_graph *memgraph_graph, mgp_result *result,
mgp_memory *memory) {
mgp::MemoryDispatcherGuard guard{memory};
const auto record_factory = mgp::RecordFactory(result);
try {
std::unordered_map<std::set<std::string>, PropertyInfo, LabelsHash, LabelsComparator> node_types_properties;
std::unordered_map<std::set<std::string>, LabelsInfo, LabelsHash, LabelsComparator> node_types_properties;
for (auto node : mgp::Graph(memgraph_graph).Nodes()) {
for (const auto node : mgp::Graph(memgraph_graph).Nodes()) {
std::set<std::string> labels_set = {};
for (auto label : node.Labels()) {
for (const auto label : node.Labels()) {
labels_set.emplace(label);
}
if (node_types_properties.find(labels_set) == node_types_properties.end()) {
node_types_properties[labels_set] = PropertyInfo{std::set<Property, PropertyComparator>(), true};
}
node_types_properties[labels_set].number_of_label_occurrences++;
if (node.Properties().empty()) {
node_types_properties[labels_set].mandatory = false; // if there is node with no property, it is not mandatory
continue;
}
auto &property_info = node_types_properties.at(labels_set);
for (auto &[key, prop] : node.Properties()) {
property_info.properties.emplace(key, std::move(prop));
if (property_info.mandatory) {
property_info.mandatory =
property_info.properties.size() == 1; // if there is only one property, it is mandatory
auto &labels_info = node_types_properties.at(labels_set);
for (const auto &[key, prop] : node.Properties()) {
auto prop_type = TypeOf(prop.Type());
if (labels_info.properties.find(key) == labels_info.properties.end()) {
labels_info.properties[key] = PropertyInfo{std::move(prop_type)};
} else {
labels_info.properties[key].property_types.emplace(prop_type);
labels_info.properties[key].number_of_property_occurrences++;
}
}
}
for (auto &[labels, property_info] : node_types_properties) {
for (auto &[node_type, labels_info] : node_types_properties) { // node type is a set of labels
std::string label_type;
mgp::List labels_list = mgp::List();
for (auto const &label : labels) {
auto labels_list = mgp::List();
for (const auto &label : node_type) {
label_type += ":`" + std::string(label) + "`";
labels_list.AppendExtend(mgp::Value(label));
}
for (auto const &prop : property_info.properties) {
for (const auto &prop : labels_info.properties) {
auto prop_types = mgp::List();
for (const auto &prop_type : prop.second.property_types) {
prop_types.AppendExtend(mgp::Value(prop_type));
}
bool mandatory = prop.second.number_of_property_occurrences == labels_info.number_of_label_occurrences;
auto record = record_factory.NewRecord();
ProcessPropertiesNode(record, label_type, labels_list, prop.name, TypeOf(prop.value.Type()),
property_info.mandatory);
ProcessPropertiesNode(record, label_type, labels_list, prop.first, prop_types, mandatory);
}
if (property_info.properties.empty()) {
if (labels_info.properties.empty()) {
auto record = record_factory.NewRecord();
ProcessPropertiesNode<std::string>(record, label_type, labels_list, "", "", false);
ProcessPropertiesNode<mgp::List>(record, label_type, labels_list, "", mgp::List(), false);
}
}
@@ -197,40 +195,45 @@ void Schema::NodeTypeProperties(mgp_list * /*args*/, mgp_graph *memgraph_graph,
void Schema::RelTypeProperties(mgp_list * /*args*/, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory) {
mgp::MemoryDispatcherGuard guard{memory};
std::unordered_map<std::string, PropertyInfo> rel_types_properties;
std::unordered_map<std::string, LabelsInfo> rel_types_properties;
const auto record_factory = mgp::RecordFactory(result);
try {
const mgp::Graph graph = mgp::Graph(memgraph_graph);
for (auto rel : graph.Relationships()) {
const auto graph = mgp::Graph(memgraph_graph);
for (const auto rel : graph.Relationships()) {
std::string rel_type = std::string(rel.Type());
if (rel_types_properties.find(rel_type) == rel_types_properties.end()) {
rel_types_properties[rel_type] = PropertyInfo{std::set<Property, PropertyComparator>(), true};
}
rel_types_properties[rel_type].number_of_label_occurrences++;
if (rel.Properties().empty()) {
rel_types_properties[rel_type].mandatory = false; // if there is rel with no property, it is not mandatory
continue;
}
auto &property_info = rel_types_properties.at(rel_type);
auto &labels_info = rel_types_properties.at(rel_type);
for (auto &[key, prop] : rel.Properties()) {
property_info.properties.emplace(key, std::move(prop));
if (property_info.mandatory) {
property_info.mandatory =
property_info.properties.size() == 1; // if there is only one property, it is mandatory
auto prop_type = TypeOf(prop.Type());
if (labels_info.properties.find(key) == labels_info.properties.end()) {
labels_info.properties[key] = PropertyInfo{std::move(prop_type)};
} else {
labels_info.properties[key].property_types.emplace(prop_type);
labels_info.properties[key].number_of_property_occurrences++;
}
}
}
for (auto &[type, property_info] : rel_types_properties) {
std::string type_str = ":`" + std::string(type) + "`";
for (auto const &prop : property_info.properties) {
for (auto &[rel_type, labels_info] : rel_types_properties) {
std::string type_str = ":`" + std::string(rel_type) + "`";
for (const auto &prop : labels_info.properties) {
auto prop_types = mgp::List();
for (const auto &prop_type : prop.second.property_types) {
prop_types.AppendExtend(mgp::Value(prop_type));
}
bool mandatory = prop.second.number_of_property_occurrences == labels_info.number_of_label_occurrences;
auto record = record_factory.NewRecord();
ProcessPropertiesRel(record, type_str, prop.name, TypeOf(prop.value.Type()), property_info.mandatory);
ProcessPropertiesRel(record, type_str, prop.first, prop_types, mandatory);
}
if (property_info.properties.empty()) {
if (labels_info.properties.empty()) {
auto record = record_factory.NewRecord();
ProcessPropertiesRel<std::string>(record, type_str, "", "", false);
ProcessPropertiesRel<mgp::List>(record, type_str, "", mgp::List(), false);
}
}

@ -0,0 +1,149 @@
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#include <iostream>
#include <string>
#include <string_view>
#include <fmt/format.h>
#include <mgp.hpp>
namespace TextSearch {
constexpr std::string_view kProcedureSearch = "search";
constexpr std::string_view kProcedureRegexSearch = "regex_search";
constexpr std::string_view kProcedureSearchAllProperties = "search_all";
constexpr std::string_view kProcedureAggregate = "aggregate";
constexpr std::string_view kParameterIndexName = "index_name";
constexpr std::string_view kParameterSearchQuery = "search_query";
constexpr std::string_view kParameterAggregationQuery = "aggregation_query";
constexpr std::string_view kReturnNode = "node";
constexpr std::string_view kReturnAggregation = "aggregation";
const std::string kSearchAllPrefix = "all";
void Search(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory);
void RegexSearch(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory);
void SearchAllProperties(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory);
void Aggregate(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory);
} // namespace TextSearch
void TextSearch::Search(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory) {
mgp::MemoryDispatcherGuard guard{memory};
const auto record_factory = mgp::RecordFactory(result);
auto arguments = mgp::List(args);
try {
const auto *index_name = arguments[0].ValueString().data();
const auto *search_query = arguments[1].ValueString().data();
for (const auto &node :
mgp::SearchTextIndex(memgraph_graph, index_name, search_query, text_search_mode::SPECIFIED_PROPERTIES)) {
auto record = record_factory.NewRecord();
record.Insert(TextSearch::kReturnNode.data(), node.ValueNode());
}
} catch (const std::exception &e) {
record_factory.SetErrorMessage(e.what());
}
}
void TextSearch::RegexSearch(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory) {
mgp::MemoryDispatcherGuard guard{memory};
const auto record_factory = mgp::RecordFactory(result);
auto arguments = mgp::List(args);
try {
const auto *index_name = arguments[0].ValueString().data();
const auto *search_query = arguments[1].ValueString().data();
for (const auto &node : mgp::SearchTextIndex(memgraph_graph, index_name, search_query, text_search_mode::REGEX)) {
auto record = record_factory.NewRecord();
record.Insert(TextSearch::kReturnNode.data(), node.ValueNode());
}
} catch (const std::exception &e) {
record_factory.SetErrorMessage(e.what());
}
}
void TextSearch::SearchAllProperties(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result,
mgp_memory *memory) {
mgp::MemoryDispatcherGuard guard{memory};
const auto record_factory = mgp::RecordFactory(result);
auto arguments = mgp::List(args);
try {
const auto *index_name = arguments[0].ValueString().data();
// Keep the formatted string alive for the whole call; taking .data() of the temporary would leave a dangling pointer.
const auto search_query = fmt::format("{}:{}", kSearchAllPrefix, arguments[1].ValueString());
for (const auto &node :
mgp::SearchTextIndex(memgraph_graph, index_name, search_query.data(), text_search_mode::ALL_PROPERTIES)) {
auto record = record_factory.NewRecord();
record.Insert(TextSearch::kReturnNode.data(), node.ValueNode());
}
} catch (const std::exception &e) {
record_factory.SetErrorMessage(e.what());
}
}
void TextSearch::Aggregate(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory) {
mgp::MemoryDispatcherGuard guard{memory};
const auto record_factory = mgp::RecordFactory(result);
auto arguments = mgp::List(args);
try {
const auto *index_name = arguments[0].ValueString().data();
const auto *search_query = arguments[1].ValueString().data();
const auto *aggregation_query = arguments[2].ValueString().data();
const auto aggregation_result =
mgp::AggregateOverTextIndex(memgraph_graph, index_name, search_query, aggregation_query);
auto record = record_factory.NewRecord();
record.Insert(TextSearch::kReturnAggregation.data(), aggregation_result.data());
} catch (const std::exception &e) {
record_factory.SetErrorMessage(e.what());
}
}
extern "C" int mgp_init_module(struct mgp_module *module, struct mgp_memory *memory) {
try {
mgp::MemoryDispatcherGuard guard{memory};
AddProcedure(TextSearch::Search, TextSearch::kProcedureSearch, mgp::ProcedureType::Read,
{
mgp::Parameter(TextSearch::kParameterIndexName, mgp::Type::String),
mgp::Parameter(TextSearch::kParameterSearchQuery, mgp::Type::String),
},
{mgp::Return(TextSearch::kReturnNode, mgp::Type::Node)}, module, memory);
AddProcedure(TextSearch::RegexSearch, TextSearch::kProcedureRegexSearch, mgp::ProcedureType::Read,
{
mgp::Parameter(TextSearch::kParameterIndexName, mgp::Type::String),
mgp::Parameter(TextSearch::kParameterSearchQuery, mgp::Type::String),
},
{mgp::Return(TextSearch::kReturnNode, mgp::Type::Node)}, module, memory);
AddProcedure(TextSearch::SearchAllProperties, TextSearch::kProcedureSearchAllProperties, mgp::ProcedureType::Read,
{
mgp::Parameter(TextSearch::kParameterIndexName, mgp::Type::String),
mgp::Parameter(TextSearch::kParameterSearchQuery, mgp::Type::String),
},
{mgp::Return(TextSearch::kReturnNode, mgp::Type::Node)}, module, memory);
AddProcedure(TextSearch::Aggregate, TextSearch::kProcedureAggregate, mgp::ProcedureType::Read,
{
mgp::Parameter(TextSearch::kParameterIndexName, mgp::Type::String),
mgp::Parameter(TextSearch::kParameterSearchQuery, mgp::Type::String),
mgp::Parameter(TextSearch::kParameterAggregationQuery, mgp::Type::String),
},
{mgp::Return(TextSearch::kReturnAggregation, mgp::Type::String)}, module, memory);
} catch (const std::exception &e) {
std::cerr << "Error while initializing query module: " << e.what() << std::endl;
return 1;
}
return 0;
}
extern "C" int mgp_shutdown_module() { return 0; }

@ -0,0 +1,73 @@
version: "3"
services:
mgbuild_v4_amzn-2:
image: "memgraph/mgbuild:v4_amzn-2"
build:
context: amzn-2
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_amzn-2"
mgbuild_v4_centos-7:
image: "memgraph/mgbuild:v4_centos-7"
build:
context: centos-7
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_centos-7"
mgbuild_v4_centos-9:
image: "memgraph/mgbuild:v4_centos-9"
build:
context: centos-9
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_centos-9"
mgbuild_v4_debian-10:
image: "memgraph/mgbuild:v4_debian-10"
build:
context: debian-10
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_debian-10"
mgbuild_v4_debian-11:
image: "memgraph/mgbuild:v4_debian-11"
build:
context: debian-11
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_debian-11"
mgbuild_v4_fedora-36:
image: "memgraph/mgbuild:v4_fedora-36"
build:
context: fedora-36
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_fedora-36"
mgbuild_v4_ubuntu-18.04:
image: "memgraph/mgbuild:v4_ubuntu-18.04"
build:
context: ubuntu-18.04
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_ubuntu-18.04"
mgbuild_v4_ubuntu-20.04:
image: "memgraph/mgbuild:v4_ubuntu-20.04"
build:
context: ubuntu-20.04
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_ubuntu-20.04"
mgbuild_v4_ubuntu-22.04:
image: "memgraph/mgbuild:v4_ubuntu-22.04"
build:
context: ubuntu-22.04
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_ubuntu-22.04"

@ -0,0 +1,81 @@
version: "3"
services:
mgbuild_v5_amzn-2:
image: "memgraph/mgbuild:v5_amzn-2"
build:
context: amzn-2
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_amzn-2"
mgbuild_v5_centos-7:
image: "memgraph/mgbuild:v5_centos-7"
build:
context: centos-7
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_centos-7"
mgbuild_v5_centos-9:
image: "memgraph/mgbuild:v5_centos-9"
build:
context: centos-9
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_centos-9"
mgbuild_v5_debian-11:
image: "memgraph/mgbuild:v5_debian-11"
build:
context: debian-11
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_debian-11"
mgbuild_v5_debian-12:
image: "memgraph/mgbuild:v5_debian-12"
build:
context: debian-12
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_debian-12"
mgbuild_v5_fedora-38:
image: "memgraph/mgbuild:v5_fedora-38"
build:
context: fedora-38
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_fedora-38"
mgbuild_v5_fedora-39:
image: "memgraph/mgbuild:v5_fedora-39"
build:
context: fedora-39
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_fedora-39"
mgbuild_v5_rocky-9.3:
image: "memgraph/mgbuild:v5_rocky-9.3"
build:
context: rocky-9.3
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_rocky-9.3"
mgbuild_v5_ubuntu-20.04:
image: "memgraph/mgbuild:v5_ubuntu-20.04"
build:
context: ubuntu-20.04
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_ubuntu-20.04"
mgbuild_v5_ubuntu-22.04:
image: "memgraph/mgbuild:v5_ubuntu-22.04"
build:
context: ubuntu-22.04
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_ubuntu-22.04"

@ -7,9 +7,34 @@ RUN yum -y update \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz
# Download and install toolchain
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-amzn-2-x86_64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/amzn-2.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/amzn-2.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
# Install PyYAML (only for amzn-2, centos-7, centos-9 and rocky-9.3)
RUN pip3 install --user PyYAML
ENTRYPOINT ["sleep", "infinity"]
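All of these builder Dockerfiles share the same shape: unpack the toolchain under /opt, install TOOLCHAIN_RUN_DEPS and MEMGRAPH_BUILD_DEPS via the repo's environment scripts, create the mg user, and install Rust and nvm for it. Building one image directly, equivalent to its compose entry, would look like:

# Build context is the amzn-2/ directory next to the compose files;
# TOOLCHAIN_VERSION must match the version in the image tag.
docker build --build-arg TOOLCHAIN_VERSION=v4 -t memgraph/mgbuild:v4_amzn-2 amzn-2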

@ -0,0 +1,18 @@
version: "3"
services:
mgbuild_v4_debian-11-arm:
image: "memgraph/mgbuild:v4_debian-11-arm"
build:
context: debian-11-arm
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_debian-11-arm"
mgbuild_v4_ubuntu-22.04-arm:
image: "memgraph/mgbuild:v4_ubuntu-22.04-arm"
build:
context: ubuntu-22.04-arm
args:
TOOLCHAIN_VERSION: "v4"
container_name: "mgbuild_v4_ubuntu-22.04-arm"

@ -0,0 +1,18 @@
version: "3"
services:
mgbuild_v5_debian-12-arm:
image: "memgraph/mgbuild:v5_debian-12-arm"
build:
context: debian-12-arm
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_debian-12-arm"
mgbuild_v5_ubuntu-22.04-arm:
image: "memgraph/mgbuild:v5_ubuntu-22.04-arm"
build:
context: ubuntu-22.04-arm
args:
TOOLCHAIN_VERSION: "v5"
container_name: "mgbuild_v5_ubuntu-22.04-arm"

@ -1,11 +0,0 @@
version: "3"
services:
debian-11-arm:
build:
context: debian-11-arm
container_name: "mgbuild_debian-11-arm"
ubuntu-2204-arm:
build:
context: ubuntu-22.04-arm
container_name: "mgbuild_ubuntu-22.04-arm"

@ -7,9 +7,33 @@ RUN yum -y update \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-centos-7-x86_64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/centos-7.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/centos-7.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
# Install PyYAML (only for amzn-2, centos-7, centos-9 and rocky-9.3)
RUN pip3 install --user PyYAML
ENTRYPOINT ["sleep", "infinity"]

@ -7,9 +7,33 @@ RUN yum -y update \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-centos-9-x86_64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/centos-9.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/centos-9.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
# Install PyYAML (only for amzn-2, centos-7, centos-9 and rocky-9.3)
RUN pip3 install --user PyYAML
ENTRYPOINT ["sleep", "infinity"]

@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-debian-10-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/debian-10.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/debian-10.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-arm64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/debian-11-arm.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/debian-11-arm.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-debian-11-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/debian-11.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/debian-11.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

@ -0,0 +1,39 @@
FROM debian:12
ARG TOOLCHAIN_VERSION
# Stops tzdata interactive configuration.
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && apt install -y \
ca-certificates wget git
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-arm64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-arm64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-arm64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-arm64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/debian-12-arm.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/debian-12-arm.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

@ -0,0 +1,39 @@
FROM debian:12
ARG TOOLCHAIN_VERSION
# Stops tzdata interactive configuration.
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && apt install -y \
ca-certificates wget git
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-debian-12-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/debian-12.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/debian-12.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

@ -1,38 +0,0 @@
version: "3"
services:
mgbuild_centos-7:
build:
context: centos-7
container_name: "mgbuild_centos-7"
mgbuild_centos-9:
build:
context: centos-9
container_name: "mgbuild_centos-9"
mgbuild_debian-10:
build:
context: debian-10
container_name: "mgbuild_debian-10"
mgbuild_debian-11:
build:
context: debian-11
container_name: "mgbuild_debian-11"
mgbuild_ubuntu-18.04:
build:
context: ubuntu-18.04
container_name: "mgbuild_ubuntu-18.04"
mgbuild_ubuntu-20.04:
build:
context: ubuntu-20.04
container_name: "mgbuild_ubuntu-20.04"
mgbuild_ubuntu-22.04:
build:
context: ubuntu-22.04
container_name: "mgbuild_ubuntu-22.04"
mgbuild_fedora-36:
build:
context: fedora-36
container_name: "mgbuild_fedora-36"
mgbuild_amzn-2:
build:
context: amzn-2
container_name: "mgbuild_amzn-2"

@ -8,9 +8,30 @@ RUN yum -y update \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-36-x86_64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/fedora-36.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/fedora-36.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

@ -0,0 +1,37 @@
FROM fedora:38
ARG TOOLCHAIN_VERSION
# Stops tzdata interactive configuration.
RUN yum -y update \
&& yum install -y wget git
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-38-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-38-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-38-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-38-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/fedora-38.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/fedora-38.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

@ -0,0 +1,37 @@
FROM fedora:39
ARG TOOLCHAIN_VERSION
# Stops tzdata interactive configuration.
RUN yum -y update \
&& yum install -y wget git
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-39-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-39-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-39-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-fedora-39-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/fedora-39.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/fedora-39.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]

release/package/mgbuild.sh (new executable file, 669 lines)

@ -0,0 +1,669 @@
#!/bin/bash
set -Eeuo pipefail
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
SCRIPT_NAME=${0##*/}
PROJECT_ROOT="$SCRIPT_DIR/../.."
MGBUILD_HOME_DIR="/home/mg"
MGBUILD_ROOT_DIR="$MGBUILD_HOME_DIR/memgraph"
DEFAULT_TOOLCHAIN="v5"
SUPPORTED_TOOLCHAINS=(
v4 v5
)
DEFAULT_OS="all"
SUPPORTED_OS=(
all
amzn-2
centos-7 centos-9
debian-10 debian-11 debian-11-arm debian-12 debian-12-arm
fedora-36 fedora-38 fedora-39
rocky-9.3
ubuntu-18.04 ubuntu-20.04 ubuntu-22.04 ubuntu-22.04-arm
)
SUPPORTED_OS_V4=(
amzn-2
centos-7 centos-9
debian-10 debian-11 debian-11-arm
fedora-36
ubuntu-18.04 ubuntu-20.04 ubuntu-22.04 ubuntu-22.04-arm
)
SUPPORTED_OS_V5=(
amzn-2
centos-7 centos-9
debian-11 debian-11-arm debian-12 debian-12-arm
fedora-38 fedora-39
rocky-9.3
ubuntu-20.04 ubuntu-22.04 ubuntu-22.04-arm
)
DEFAULT_BUILD_TYPE="Release"
SUPPORTED_BUILD_TYPES=(
Debug
Release
RelWithDebInfo
)
DEFAULT_ARCH="amd"
SUPPORTED_ARCHS=(
amd
arm
)
SUPPORTED_TESTS=(
clang-tidy cppcheck-and-clang-format code-analysis
code-coverage drivers drivers-high-availability durability e2e gql-behave
integration leftover-CTest macro-benchmark
mgbench stress-plain stress-ssl
unit unit-coverage upload-to-bench-graph
)
DEFAULT_THREADS=0
DEFAULT_ENTERPRISE_LICENSE=""
DEFAULT_ORGANIZATION_NAME="memgraph"
print_help () {
echo -e "\nUsage: $SCRIPT_NAME [GLOBAL OPTIONS] COMMAND [COMMAND OPTIONS]"
echo -e "\nInteract with mgbuild containers"
echo -e "\nCommands:"
echo -e " build Build mgbuild image"
echo -e " build-memgraph [OPTIONS] Build memgraph binary inside mgbuild container"
echo -e " copy OPTIONS Copy an artifact from mgbuild container to host"
echo -e " package-memgraph Create memgraph package from built binary inside mgbuild container"
echo -e " pull Pull mgbuild image from dockerhub"
echo -e " push [OPTIONS] Push mgbuild image to dockerhub"
echo -e " run [OPTIONS] Run mgbuild container"
echo -e " stop [OPTIONS] Stop mgbuild container"
echo -e " test-memgraph TEST Run a selected test TEST (see supported tests below) inside mgbuild container"
echo -e "\nSupported tests:"
echo -e " \"${SUPPORTED_TESTS[*]}\""
echo -e "\nGlobal options:"
echo -e " --arch string Specify target architecture (\"${SUPPORTED_ARCHS[*]}\") (default \"$DEFAULT_ARCH\")"
echo -e " --build-type string Specify build type (\"${SUPPORTED_BUILD_TYPES[*]}\") (default \"$DEFAULT_BUILD_TYPE\")"
echo -e " --enterprise-license string Specify the enterprise license (default \"\")"
echo -e " --organization-name string Specify the organization name (default \"memgraph\")"
echo -e " --os string Specify operating system (\"${SUPPORTED_OS[*]}\") (default \"$DEFAULT_OS\")"
echo -e " --threads int Specify the number of threads a command will use (default \"\$(nproc)\" for container)"
echo -e " --toolchain string Specify toolchain version (\"${SUPPORTED_TOOLCHAINS[*]}\") (default \"$DEFAULT_TOOLCHAIN\")"
echo -e "\nbuild-memgraph options:"
echo -e " --asan Build with ASAN"
echo -e " --community Build community version"
echo -e " --coverage Build with code coverage"
echo -e " --for-docker Add flag -DMG_TELEMETRY_ID_OVERRIDE=DOCKER to cmake"
echo -e " --for-platform Add flag -DMG_TELEMETRY_ID_OVERRIDE=DOCKER-PLATFORM to cmake"
echo -e " --init-only Only run init script"
echo -e " --no-copy Don't copy the memgraph repo from host."
echo -e " Use this option with caution, be sure that memgraph source code is in correct location inside mgbuild container"
echo -e " --ubsan Build with UBSAN"
echo -e "\ncopy options:"
echo -e " --binary Copy memgraph binary from mgbuild container to host"
echo -e " --build-logs Copy build logs from mgbuild container to host"
echo -e " --package Copy memgraph package from mgbuild container to host"
echo -e "\npush options:"
echo -e " -p, --password string Specify password for docker login"
echo -e " -u, --username string Specify username for docker login"
echo -e "\nrun options:"
echo -e " --pull Pull the mgbuild image before running"
echo -e "\nstop options:"
echo -e " --remove Remove the stopped mgbuild container"
echo -e "\nToolchain v4 supported OSs:"
echo -e " \"${SUPPORTED_OS_V4[*]}\""
echo -e "\nToolchain v5 supported OSs:"
echo -e " \"${SUPPORTED_OS_V5[*]}\""
echo -e "\nExample usage:"
echo -e " $SCRIPT_NAME --os debian-12 --toolchain v5 --arch amd run"
echo -e " $SCRIPT_NAME --os debian-12 --toolchain v5 --arch amd --build-type RelWithDebInfo build-memgraph --community"
echo -e " $SCRIPT_NAME --os debian-12 --toolchain v5 --arch amd --build-type RelWithDebInfo test-memgraph unit"
echo -e " $SCRIPT_NAME --os debian-12 --toolchain v5 --arch amd package"
echo -e " $SCRIPT_NAME --os debian-12 --toolchain v5 --arch amd copy --package"
echo -e " $SCRIPT_NAME --os debian-12 --toolchain v5 --arch amd stop --remove"
}
check_support() {
local is_supported=false
case "$1" in
arch)
for e in "${SUPPORTED_ARCHS[@]}"; do
if [[ "$e" == "$2" ]]; then
is_supported=true
break
fi
done
if [[ "$is_supported" == false ]]; then
echo -e "Error: Architecture $2 isn't supported!\nChoose from ${SUPPORTED_ARCHS[*]}"
exit 1
fi
;;
build_type)
for e in "${SUPPORTED_BUILD_TYPES[@]}"; do
if [[ "$e" == "$2" ]]; then
is_supported=true
break
fi
done
if [[ "$is_supported" == false ]]; then
echo -e "Error: Build type $2 isn't supported!\nChoose from ${SUPPORTED_BUILD_TYPES[*]}"
exit 1
fi
;;
os)
for e in "${SUPPORTED_OS[@]}"; do
if [[ "$e" == "$2" ]]; then
is_supported=true
break
fi
done
if [[ "$is_supported" == false ]]; then
echo -e "Error: OS $2 isn't supported!\nChoose from ${SUPPORTED_OS[*]}"
exit 1
fi
;;
toolchain)
for e in "${SUPPORTED_TOOLCHAINS[@]}"; do
if [[ "$e" == "$2" ]]; then
is_supported=true
break
fi
done
if [[ "$is_supported" == false ]]; then
echo -e "TError: oolchain version $2 isn't supported!\nChoose from ${SUPPORTED_TOOLCHAINS[*]}"
exit 1
fi
;;
os_toolchain_combo)
if [[ "$3" == "v4" ]]; then
local SUPPORTED_OS_TOOLCHAIN=("${SUPPORTED_OS_V4[@]}")
elif [[ "$3" == "v5" ]]; then
local SUPPORTED_OS_TOOLCHAIN=("${SUPPORTED_OS_V5[@]}")
else
echo -e "Error: $3 isn't a supported toolchain_version!\nChoose from ${SUPPORTED_TOOLCHAINS[*]}"
exit 1
fi
for e in "${SUPPORTED_OS_TOOLCHAIN[@]}"; do
if [[ "$e" == "$2" ]]; then
is_supported=true
break
fi
done
if [[ "$is_supported" == false ]]; then
echo -e "Error: Toolchain version $3 doesn't support OS $2!\nChoose from ${SUPPORTED_OS_TOOLCHAIN[*]}"
exit 1
fi
;;
*)
echo -e "Error: This function can only check arch, build_type, os, toolchain version and os toolchain combination"
exit 1
;;
esac
}
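check_support is called once per parsed global option and once more for the final OS/toolchain pairing, as in the argument-parsing loop further below:

# Each call exits 1 with a diagnostic if the value is unsupported.
check_support arch amd
check_support toolchain v5
check_support os_toolchain_combo debian-12 v5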
##################################################
######## BUILD, COPY AND PACKAGE MEMGRAPH ########
##################################################
build_memgraph () {
local build_container="mgbuild_${toolchain_version}_${os}"
local ACTIVATE_TOOLCHAIN="source /opt/toolchain-${toolchain_version}/activate"
local ACTIVATE_CARGO="source $MGBUILD_HOME_DIR/.cargo/env"
local container_build_dir="$MGBUILD_ROOT_DIR/build"
local container_output_dir="$container_build_dir/output"
local arm_flag=""
if [[ "$arch" == "arm" ]] || [[ "$os" =~ "-arm" ]]; then
arm_flag="-DMG_ARCH="ARM64""
fi
local build_type_flag="-DCMAKE_BUILD_TYPE=$build_type"
local telemetry_id_override_flag=""
local community_flag=""
local coverage_flag=""
local asan_flag=""
local ubsan_flag=""
local init_only=false
local for_docker=false
local for_platform=false
local copy_from_host=true
while [[ "$#" -gt 0 ]]; do
case "$1" in
--community)
community_flag="-DMG_ENTERPRISE=OFF"
shift 1
;;
--init-only)
init_only=true
shift 1
;;
--for-docker)
for_docker=true
if [[ "$for_platform" == "true" ]]; then
echo "Error: Cannot combine --for-docker and --for-platform flags"
exit 1
fi
telemetry_id_override_flag=" -DMG_TELEMETRY_ID_OVERRIDE=DOCKER "
shift 1
;;
--for-platform)
for_platform=true
if [[ "$for_docker" == "true" ]]; then
echo "Error: Cannot combine --for-docker and --for-platform flags"
exit 1
fi
telemetry_id_override_flag=" -DMG_TELEMETRY_ID_OVERRIDE=DOCKER-PLATFORM "
shift 1
;;
--coverage)
coverage_flag="-DTEST_COVERAGE=ON"
shift 1
;;
--asan)
asan_flag="-DASAN=ON"
shift 1
;;
--ubsan)
ubsan_flag="-DUBSAN=ON"
shift 1
;;
--no-copy)
copy_from_host=false
shift 1
;;
*)
echo "Error: Unknown flag '$1'"
exit 1
;;
esac
done
echo "Initializing deps ..."
# If master is not the current branch, fetch it, because the get_version
# script depends on it. If we are on master, the fetch command is going to
# fail so that's why there is the explicit check.
# Required here because Docker build container can't access remote.
cd "$PROJECT_ROOT"
if [[ "$(git rev-parse --abbrev-ref HEAD)" != "master" ]]; then
git fetch origin master:master
fi
if [[ "$copy_from_host" == "true" ]]; then
# Ensure we have a clean build directory
docker exec -u mg "$build_container" bash -c "rm -rf $MGBUILD_ROOT_DIR && mkdir -p $MGBUILD_ROOT_DIR"
echo "Copying project files..."
docker cp "$PROJECT_ROOT/." "$build_container:$MGBUILD_ROOT_DIR/"
fi
# Change ownership of copied files so the mg user inside container can access them
docker exec -u root $build_container bash -c "chown -R mg:mg $MGBUILD_ROOT_DIR"
echo "Installing dependencies using '/memgraph/environment/os/$os.sh' script..."
docker exec -u root "$build_container" bash -c "$MGBUILD_ROOT_DIR/environment/os/$os.sh check TOOLCHAIN_RUN_DEPS || /environment/os/$os.sh install TOOLCHAIN_RUN_DEPS"
docker exec -u root "$build_container" bash -c "$MGBUILD_ROOT_DIR/environment/os/$os.sh check MEMGRAPH_BUILD_DEPS || /environment/os/$os.sh install MEMGRAPH_BUILD_DEPS"
echo "Building targeted package..."
# Fix issue with git marking directory as not safe
docker exec -u mg "$build_container" bash -c "cd $MGBUILD_ROOT_DIR && git config --global --add safe.directory '*'"
docker exec -u mg "$build_container" bash -c "cd $MGBUILD_ROOT_DIR && $ACTIVATE_TOOLCHAIN && ./init --ci"
if [[ "$init_only" == "true" ]]; then
return
fi
echo "Building Memgraph for $os on $build_container..."
docker exec -u mg "$build_container" bash -c "cd $container_build_dir && rm -rf ./*"
# Fix cmake failing locally if the remote was cloned via SSH
docker exec -u mg "$build_container" bash -c "cd $MGBUILD_ROOT_DIR && git remote set-url origin https://github.com/memgraph/memgraph.git"
# Define cmake command
local cmake_cmd="cmake $build_type_flag $arm_flag $community_flag $telemetry_id_override_flag $coverage_flag $asan_flag $ubsan_flag .."
docker exec -u mg "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && $ACTIVATE_CARGO && $cmake_cmd"
# ' is used instead of " because we need to run make within the allowed
# container resources.
# Default value for $threads is 0 instead of $(nproc) because macos
# doesn't support the nproc command.
# 0 is set for default value and checked here because mgbuild containers
# support nproc
# shellcheck disable=SC2016
if [[ "$threads" == 0 ]]; then
docker exec -u mg "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && $ACTIVATE_CARGO "'&& make -j$(nproc)'
docker exec -u mg "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && $ACTIVATE_CARGO "'&& make -j$(nproc) -B mgconsole'
else
# $threads is a host-side variable, so it must be expanded by the host shell.
docker exec -u mg "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && $ACTIVATE_CARGO && make -j$threads"
docker exec -u mg "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && $ACTIVATE_CARGO && make -j$threads -B mgconsole"
fi
}
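Putting build_memgraph together with the global options, a typical host-side sequence (matching the examples in print_help) is:

# Start the builder container, then build a community binary with UBSAN inside it.
./release/package/mgbuild.sh --os debian-12 --toolchain v5 --arch amd run
./release/package/mgbuild.sh --os debian-12 --toolchain v5 --arch amd build-memgraph --community --ubsan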
package_memgraph() {
local ACTIVATE_TOOLCHAIN="source /opt/toolchain-${toolchain_version}/activate"
local build_container="mgbuild_${toolchain_version}_${os}"
local container_output_dir="$MGBUILD_ROOT_DIR/build/output"
local package_command=""
if [[ "$os" =~ ^"centos".* ]] || [[ "$os" =~ ^"fedora".* ]] || [[ "$os" =~ ^"amzn".* ]] || [[ "$os" =~ ^"rocky".* ]]; then
docker exec -u root "$build_container" bash -c "yum -y update"
package_command=" cpack -G RPM --config ../CPackConfig.cmake && rpmlint --file='../../release/rpm/rpmlintrc' memgraph*.rpm "
fi
if [[ "$os" =~ ^"debian".* ]]; then
docker exec -u root "$build_container" bash -c "apt --allow-releaseinfo-change -y update"
package_command=" cpack -G DEB --config ../CPackConfig.cmake "
fi
if [[ "$os" =~ ^"ubuntu".* ]]; then
docker exec -u root "$build_container" bash -c "apt update"
package_command=" cpack -G DEB --config ../CPackConfig.cmake "
fi
docker exec -u mg "$build_container" bash -c "mkdir -p $container_output_dir && cd $container_output_dir && $ACTIVATE_TOOLCHAIN && $package_command"
}
copy_memgraph() {
local build_container="mgbuild_${toolchain_version}_${os}"
case "$1" in
--binary)
echo "Copying memgraph binary to host..."
local container_output_path="$MGBUILD_ROOT_DIR/build/memgraph"
local host_output_path="$PROJECT_ROOT/build/memgraph"
mkdir -p "$PROJECT_ROOT/build"
docker cp -L $build_container:$container_output_path $host_output_path
echo "Binary saved to $host_output_path"
;;
--build-logs)
echo "Copying memgraph build logs to host..."
local container_output_path="$MGBUILD_ROOT_DIR/build/logs"
local host_output_path="$PROJECT_ROOT/build/logs"
mkdir -p "$PROJECT_ROOT/build"
docker cp -L $build_container:$container_output_path $host_output_path
echo "Build logs saved to $host_output_path"
;;
--package)
echo "Copying memgraph package to host..."
local container_output_dir="$MGBUILD_ROOT_DIR/build/output"
local host_output_dir="$PROJECT_ROOT/build/output/$os"
local last_package_name=$(docker exec -u mg "$build_container" bash -c "cd $container_output_dir && ls -t memgraph* | head -1")
mkdir -p "$host_output_dir"
docker cp "$build_container:$container_output_dir/$last_package_name" "$host_output_dir/$last_package_name"
echo "Package saved to $host_output_dir/$last_package_name"
;;
*)
echo "Error: Unknown flag '$1'"
exit 1
;;
esac
}
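After a successful build and packaging run, artifacts are pulled back to the host like so:

# The binary lands in build/, the package in build/output/<os>/ on the host.
./release/package/mgbuild.sh --os debian-12 --toolchain v5 copy --binary
./release/package/mgbuild.sh --os debian-12 --toolchain v5 copy --package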
##################################################
##################### TESTS ######################
##################################################
test_memgraph() {
local ACTIVATE_TOOLCHAIN="source /opt/toolchain-${toolchain_version}/activate"
local ACTIVATE_VENV="./setup.sh /opt/toolchain-${toolchain_version}/activate"
local ACTIVATE_CARGO="source $MGBUILD_HOME_DIR/.cargo/env"
local EXPORT_LICENSE="export MEMGRAPH_ENTERPRISE_LICENSE=$enterprise_license"
local EXPORT_ORG_NAME="export MEMGRAPH_ORGANIZATION_NAME=$organization_name"
local BUILD_DIR="$MGBUILD_ROOT_DIR/build"
local build_container="mgbuild_${toolchain_version}_${os}"
echo "Running $1 test on $build_container..."
case "$1" in
unit)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $BUILD_DIR && $ACTIVATE_TOOLCHAIN "'&& ctest -R memgraph__unit --output-on-failure -j$threads'
;;
unit-coverage)
local setup_lsan_ubsan="export LSAN_OPTIONS=suppressions=$BUILD_DIR/../tools/lsan.supp && export UBSAN_OPTIONS=halt_on_error=1"
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $BUILD_DIR && $ACTIVATE_TOOLCHAIN && $setup_lsan_ubsan "'&& ctest -R memgraph__unit --output-on-failure -j2'
;;
leftover-CTest)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $BUILD_DIR && $ACTIVATE_TOOLCHAIN "'&& ctest -E "(memgraph__unit|memgraph__benchmark)" --output-on-failure'
;;
drivers)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR "'&& ./tests/drivers/run.sh'
;;
drivers-high-availability)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR "'&& ./tests/drivers/run_cluster.sh'
;;
integration)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR "'&& tests/integration/run.sh'
;;
cppcheck-and-clang-format)
local test_output_path="$MGBUILD_ROOT_DIR/tools/github/cppcheck_and_clang_format.txt"
local test_output_host_dest="$PROJECT_ROOT/tools/github/cppcheck_and_clang_format.txt"
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tools/github && $ACTIVATE_TOOLCHAIN "'&& ./cppcheck_and_clang_format diff'
docker cp $build_container:$test_output_path $test_output_host_dest
;;
stress-plain)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tests/stress && source ve3/bin/activate "'&& ./continuous_integration'
;;
stress-ssl)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tests/stress && source ve3/bin/activate "'&& ./continuous_integration --use-ssl'
;;
durability)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tests/stress && source ve3/bin/activate "'&& python3 durability --num-steps 5'
;;
gql-behave)
local test_output_dir="$MGBUILD_ROOT_DIR/tests/gql_behave"
local test_output_host_dest="$PROJECT_ROOT/tests/gql_behave"
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tests && $ACTIVATE_VENV && cd $MGBUILD_ROOT_DIR/tests/gql_behave "'&& ./continuous_integration'
docker cp $build_container:$test_output_dir/gql_behave_status.csv $test_output_host_dest/gql_behave_status.csv
docker cp $build_container:$test_output_dir/gql_behave_status.html $test_output_host_dest/gql_behave_status.html
;;
macro-benchmark)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && export USER=mg && export LANG=$(echo $LANG) && cd $MGBUILD_ROOT_DIR/tests/macro_benchmark "'&& ./harness QuerySuite MemgraphRunner --groups aggregation 1000_create unwind_create dense_expand match --no-strict'
;;
mgbench)
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tests/mgbench "'&& ./benchmark.py vendor-native --num-workers-for-benchmark 12 --export-results benchmark_result.json pokec/medium/*/*'
;;
upload-to-bench-graph)
shift 1
local SETUP_PASSED_ARGS="export PASSED_ARGS=\"$@\""
local SETUP_VE3_ENV="virtualenv -p python3 ve3 && source ve3/bin/activate && pip install -r requirements.txt"
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tools/bench-graph-client && $SETUP_VE3_ENV && $SETUP_PASSED_ARGS "'&& ./main.py $PASSED_ARGS'
;;
code-analysis)
shift 1
local SETUP_PASSED_ARGS="export PASSED_ARGS=\"$@\""
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && cd $MGBUILD_ROOT_DIR/tests/code_analysis && $SETUP_PASSED_ARGS "'&& ./python_code_analysis.sh $PASSED_ARGS'
;;
code-coverage)
local test_output_path="$MGBUILD_ROOT_DIR/tools/github/generated/code_coverage.tar.gz"
local test_output_host_dest="$PROJECT_ROOT/tools/github/generated/code_coverage.tar.gz"
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && $ACTIVATE_TOOLCHAIN && cd $MGBUILD_ROOT_DIR/tools/github "'&& ./coverage_convert'
docker exec -u mg $build_container bash -c "cd $MGBUILD_ROOT_DIR/tools/github/generated && tar -czf code_coverage.tar.gz coverage.json html report.json summary.rmu"
mkdir -p $PROJECT_ROOT/tools/github/generated
docker cp $build_container:$test_output_path $test_output_host_dest
;;
clang-tidy)
shift 1
local SETUP_PASSED_ARGS="export PASSED_ARGS=\"$@\""
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && export THREADS=$threads && $ACTIVATE_TOOLCHAIN && cd $MGBUILD_ROOT_DIR/tests/code_analysis && $SETUP_PASSED_ARGS "'&& ./clang_tidy.sh $PASSED_ARGS'
;;
e2e)
# local kafka_container="kafka_kafka_1"
# local kafka_hostname="kafka"
# local pulsar_container="pulsar_pulsar_1"
# local pulsar_hostname="pulsar"
# local setup_hostnames="export KAFKA_HOSTNAME=$kafka_hostname && PULSAR_HOSTNAME=$pulsar_hostname"
# local build_container_network=$(docker inspect $build_container --format='{{ .HostConfig.NetworkMode }}')
# docker network connect --alias $kafka_hostname $build_container_network $kafka_container > /dev/null 2>&1 || echo "Kafka container already inside correct network or something went wrong ..."
# docker network connect --alias $pulsar_hostname $build_container_network $pulsar_container > /dev/null 2>&1 || echo "Pulsar container already inside correct network or something went wrong ..."
docker exec -u mg $build_container bash -c "pip install --user networkx && pip3 install --user networkx"
docker exec -u mg $build_container bash -c "$EXPORT_LICENSE && $EXPORT_ORG_NAME && $ACTIVATE_CARGO && cd $MGBUILD_ROOT_DIR/tests && $ACTIVATE_VENV && source ve3/bin/activate_e2e && cd $MGBUILD_ROOT_DIR/tests/e2e "'&& ./run.sh'
;;
*)
echo "Error: Unknown test '$1'"
exit 1
;;
esac
}
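Tests run inside the same container against the binary built there, for example:

# Run the unit suite with 8 ctest jobs inside the debian-12 v5 builder.
./release/package/mgbuild.sh --os debian-12 --toolchain v5 --threads 8 test-memgraph unit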
##################################################
################### PARSE ARGS ###################
##################################################
if [ "$#" -eq 0 ] || [ "$1" == "-h" ] || [ "$1" == "--help" ]; then
print_help
exit 0
fi
arch=$DEFAULT_ARCH
build_type=$DEFAULT_BUILD_TYPE
enterprise_license=$DEFAULT_ENTERPRISE_LICENSE
organization_name=$DEFAULT_ORGANIZATION_NAME
os=$DEFAULT_OS
threads=$DEFAULT_THREADS
toolchain_version=$DEFAULT_TOOLCHAIN
command=""
while [[ $# -gt 0 ]]; do
case "$1" in
--arch)
arch=$2
check_support arch $arch
shift 2
;;
--build-type)
build_type=$2
check_support build_type $build_type
shift 2
;;
--enterprise-license)
enterprise_license=$2
shift 2
;;
--organization-name)
organization_name=$2
shift 2
;;
--os)
os=$2
check_support os $os
shift 2
;;
--threads)
threads=$2
shift 2
;;
--toolchain)
toolchain_version=$2
check_support toolchain $toolchain_version
shift 2
;;
*)
if [[ "$1" =~ ^--.* ]]; then
echo -e "Error: Unknown option '$1'"
exit 1
else
command=$1
shift 1
break
fi
;;
esac
done
check_support os_toolchain_combo $os $toolchain_version
if [[ "$command" == "" ]]; then
echo -e "Error: Command not provided, please provide command"
print_help
exit 1
fi
if docker compose version > /dev/null 2>&1; then
docker_compose_cmd="docker compose"
elif which docker-compose > /dev/null 2>&1; then
docker_compose_cmd="docker-compose"
else
echo -e "Missing command: There has to be installed either 'docker-compose' or 'docker compose'"
exit 1
fi
echo "Using $docker_compose_cmd"
##################################################
################# PARSE COMMAND ##################
##################################################
case $command in
build)
cd $SCRIPT_DIR
if [[ "$os" == "all" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml build
else
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml build mgbuild_${toolchain_version}_${os}
fi
;;
run)
cd $SCRIPT_DIR
pull=false
if [[ "$#" -gt 0 ]]; then
if [[ "$1" == "--pull" ]]; then
pull=true
else
echo "Error: Unknown flag '$1'"
exit 1
fi
fi
if [[ "$os" == "all" ]]; then
if [[ "$pull" == "true" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml pull --ignore-pull-failures
elif [[ "$docker_compose_cmd" == "docker compose" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml pull --ignore-pull-failures --policy missing
fi
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml up -d
else
if [[ "$pull" == "true" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml pull mgbuild_${toolchain_version}_${os}
elif ! docker image inspect memgraph/mgbuild:${toolchain_version}_${os} > /dev/null 2>&1; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml pull --ignore-pull-failures mgbuild_${toolchain_version}_${os}
fi
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml up -d mgbuild_${toolchain_version}_${os}
fi
;;
stop)
cd $SCRIPT_DIR
remove=false
if [[ "$#" -gt 0 ]]; then
if [[ "$1" == "--remove" ]]; then
remove=true
else
echo "Error: Unknown flag '$1'"
exit 1
fi
fi
if [[ "$os" == "all" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml down
else
docker stop mgbuild_${toolchain_version}_${os}
if [[ "$remove" == "true" ]]; then
docker rm mgbuild_${toolchain_version}_${os}
fi
fi
;;
pull)
cd $SCRIPT_DIR
if [[ "$os" == "all" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml pull --ignore-pull-failures
else
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml pull mgbuild_${toolchain_version}_${os}
fi
;;
push)
docker login "$@"
cd $SCRIPT_DIR
if [[ "$os" == "all" ]]; then
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml push --ignore-push-failures
else
$docker_compose_cmd -f ${arch}-builders-${toolchain_version}.yml push mgbuild_${toolchain_version}_${os}
fi
;;
build-memgraph)
build_memgraph "$@"
;;
package-memgraph)
package_memgraph
;;
test-memgraph)
test_memgraph "$@"
;;
copy)
copy_memgraph "$@"
;;
*)
echo "Error: Unknown command '$command'"
exit 1
;;
esac

@ -0,0 +1,40 @@
FROM rockylinux:9.3
ARG TOOLCHAIN_VERSION
# Stops tzdata interactive configuration.
RUN yum -y update \
&& yum install -y wget git
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-rocky-9.3-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-rocky-9.3-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-rocky-9.3-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-rocky-9.3-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/rocky-9.3.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/rocky-9.3.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
# Install PyYAML (only for amzn-2, centos-7, centos-9 and rocky-9.3)
RUN pip3 install --user PyYAML
ENTRYPOINT ["sleep", "infinity"]

@ -1,208 +0,0 @@
#!/bin/bash
set -Eeuo pipefail
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
SUPPORTED_OS=(
centos-7 centos-9
debian-10 debian-11 debian-11-arm
ubuntu-18.04 ubuntu-20.04 ubuntu-22.04 ubuntu-22.04-arm
fedora-36
amzn-2
)
SUPPORTED_BUILD_TYPES=(
Debug
Release
RelWithDebInfo
)
PROJECT_ROOT="$SCRIPT_DIR/../.."
TOOLCHAIN_VERSION="toolchain-v4"
ACTIVATE_TOOLCHAIN="source /opt/${TOOLCHAIN_VERSION}/activate"
HOST_OUTPUT_DIR="$PROJECT_ROOT/build/output"
print_help () {
# TODO(gitbuda): Update the release/package/run.sh help
echo "$0 init|package|docker|test {os} {build_type} [--for-docker|--for-platform]"
echo ""
echo " OSs: ${SUPPORTED_OS[*]}"
echo " Build types: ${SUPPORTED_BUILD_TYPES[*]}"
exit 1
}
make_package () {
os="$1"
build_type="$2"
build_container="mgbuild_$os"
echo "Building Memgraph for $os on $build_container..."
package_command=""
if [[ "$os" =~ ^"centos".* ]] || [[ "$os" =~ ^"fedora".* ]] || [[ "$os" =~ ^"amzn".* ]]; then
docker exec "$build_container" bash -c "yum -y update"
package_command=" cpack -G RPM --config ../CPackConfig.cmake && rpmlint --file='../../release/rpm/rpmlintrc' memgraph*.rpm "
fi
if [[ "$os" =~ ^"debian".* ]]; then
docker exec "$build_container" bash -c "apt --allow-releaseinfo-change -y update"
package_command=" cpack -G DEB --config ../CPackConfig.cmake "
fi
if [[ "$os" =~ ^"ubuntu".* ]]; then
docker exec "$build_container" bash -c "apt update"
package_command=" cpack -G DEB --config ../CPackConfig.cmake "
fi
telemetry_id_override_flag=""
if [[ "$#" -gt 2 ]]; then
if [[ "$3" == "--for-docker" ]]; then
telemetry_id_override_flag=" -DMG_TELEMETRY_ID_OVERRIDE=DOCKER "
elif [[ "$3" == "--for-platform" ]]; then
telemetry_id_override_flag=" -DMG_TELEMETRY_ID_OVERRIDE=DOCKER-PLATFORM"
else
print_help
exit
fi
fi
echo "Copying project files..."
# If master is not the current branch, fetch it, because the get_version
# script depends on it. If we are on master, the fetch command is going to
# fail so that's why there is the explicit check.
# Required here because Docker build container can't access remote.
cd "$PROJECT_ROOT"
if [[ "$(git rev-parse --abbrev-ref HEAD)" != "master" ]]; then
git fetch origin master:master
fi
# Ensure we have a clean build directory
docker exec "$build_container" rm -rf /memgraph
docker exec "$build_container" mkdir -p /memgraph
# TODO(gitbuda): Revisit copying the whole repo -> makes sense under CI.
docker cp "$PROJECT_ROOT/." "$build_container:/memgraph/"
container_build_dir="/memgraph/build"
container_output_dir="$container_build_dir/output"
# TODO(gitbuda): TOOLCHAIN_RUN_DEPS should be installed during the Docker
# image build phase, but that is not easy at this point because the
# environment/os/{os}.sh does not come within the toolchain package. When
# migrating to the next version of toolchain do that, and remove the
# TOOLCHAIN_RUN_DEPS installation from here.
# TODO(gitbuda): On the other side, having this here allows updating deps
# without rerunning the build containers.
echo "Installing dependencies using '/memgraph/environment/os/$os.sh' script..."
docker exec "$build_container" bash -c "/memgraph/environment/os/$os.sh install TOOLCHAIN_RUN_DEPS"
docker exec "$build_container" bash -c "/memgraph/environment/os/$os.sh install MEMGRAPH_BUILD_DEPS"
echo "Building targeted package..."
# Fix issue with git marking directory as not safe
docker exec "$build_container" bash -c "cd /memgraph && git config --global --add safe.directory '*'"
docker exec "$build_container" bash -c "cd /memgraph && $ACTIVATE_TOOLCHAIN && ./init"
docker exec "$build_container" bash -c "cd $container_build_dir && rm -rf ./*"
# TODO(gitbuda): cmake fails locally if remote is cloned via ssh because of the key -> FIX
if [[ "$os" =~ "-arm" ]]; then
docker exec "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && cmake -DCMAKE_BUILD_TYPE=$build_type -DMG_ARCH="ARM64" $telemetry_id_override_flag .."
else
docker exec "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN && cmake -DCMAKE_BUILD_TYPE=$build_type $telemetry_id_override_flag .."
fi
# ' is used instead of " because we need to run make within the allowed
# container resources.
# shellcheck disable=SC2016
docker exec "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN "'&& make -j$(nproc)'
docker exec "$build_container" bash -c "cd $container_build_dir && $ACTIVATE_TOOLCHAIN "'&& make -j$(nproc) -B mgconsole'
docker exec "$build_container" bash -c "mkdir -p $container_output_dir && cd $container_output_dir && $ACTIVATE_TOOLCHAIN && $package_command"
echo "Copying targeted package to host..."
last_package_name=$(docker exec "$build_container" bash -c "cd $container_output_dir && ls -t memgraph* | head -1")
# The operating system folder is introduced because multiple different
# packages could be preserved during the same build "session".
mkdir -p "$HOST_OUTPUT_DIR/$os"
package_host_destination="$HOST_OUTPUT_DIR/$os/$last_package_name"
docker cp "$build_container:$container_output_dir/$last_package_name" "$package_host_destination"
echo "Package saved to $package_host_destination."
}
case "$1" in
init)
cd "$SCRIPT_DIR"
if ! which "docker-compose" >/dev/null; then
docker_compose_cmd="docker compose"
else
docker_compose_cmd="docker-compose"
fi
$docker_compose_cmd build --build-arg TOOLCHAIN_VERSION="${TOOLCHAIN_VERSION}"
$docker_compose_cmd up -d
;;
docker)
# NOTE: Docker is built on top of the Debian 11 package.
based_on_os="debian-11"
# shellcheck disable=SC2012
last_package_name=$(cd "$HOST_OUTPUT_DIR/$based_on_os" && ls -t memgraph* | head -1)
docker_build_folder="$PROJECT_ROOT/release/docker"
cd "$docker_build_folder"
./package_docker --latest "$HOST_OUTPUT_DIR/$based_on_os/$last_package_name"
# shellcheck disable=SC2012
docker_image_name=$(cd "$docker_build_folder" && ls -t memgraph* | head -1)
docker_host_folder="$HOST_OUTPUT_DIR/docker"
docker_host_image_path="$docker_host_folder/$docker_image_name"
mkdir -p "$docker_host_folder"
cp "$docker_build_folder/$docker_image_name" "$docker_host_image_path"
echo "Docker images saved to $docker_host_image_path."
;;
package)
shift 1
if [[ "$#" -lt 2 ]]; then
print_help
fi
os="$1"
build_type="$2"
shift 2
is_os_ok=false
for supported_os in "${SUPPORTED_OS[@]}"; do
if [[ "$supported_os" == "${os}" ]]; then
is_os_ok=true
break
fi
done
is_build_type_ok=false
for supported_build_type in "${SUPPORTED_BUILD_TYPES[@]}"; do
if [[ "$supported_build_type" == "${build_type}" ]]; then
is_build_type_ok=true
break
fi
done
if [[ "$is_os_ok" == true && "$is_build_type_ok" == true ]]; then
make_package "$os" "$build_type" "$@"
else
if [[ "$is_os_ok" == false ]]; then
echo "Unsupported OS: $os"
elif [[ "$is_build_type_ok" == false ]]; then
echo "Unsupported build type: $build_type"
fi
print_help
fi
;;
build)
shift 1
if [[ "$#" -ne 2 ]]; then
print_help
fi
# in the vX format, e.g. v5
toolchain_version="$1"
# a name of the os folder, e.g. ubuntu-22.04-arm
os="$2"
cd "$SCRIPT_DIR/$os"
docker build -f Dockerfile --build-arg TOOLCHAIN_VERSION="toolchain-$toolchain_version" -t "memgraph/memgraph-builder:${toolchain_version}_$os" .
;;
test)
echo "TODO(gitbuda): Test all packages on mgtest containers."
;;
*)
print_help
;;
esac
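Per print_help, this now-removed script was driven as run.sh init|package|docker|test {os} {build_type} [--for-docker|--for-platform]; for example ./run.sh package debian-11 Release --for-docker followed by ./run.sh docker, which wraps the newest debian-11 package into a Docker image.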


@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-18.04-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/ubuntu-18.04.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/ubuntu-18.04.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]


@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-20.04-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/ubuntu-20.04.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/ubuntu-20.04.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]


@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-arm64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/ubuntu-22.04-arm.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/ubuntu-22.04-arm.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]


@ -10,9 +10,30 @@ RUN apt update && apt install -y \
# Do NOT be smart here and clean the cache because the container is used in the
# stateful context.
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/${TOOLCHAIN_VERSION}/${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz \
-O ${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz \
&& tar xzvf ${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz -C /opt \
&& rm ${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz
RUN wget -q https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/toolchain-${TOOLCHAIN_VERSION}/toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz \
-O toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz \
&& tar xzvf toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz -C /opt \
&& rm toolchain-${TOOLCHAIN_VERSION}-binaries-ubuntu-22.04-amd64.tar.gz
# Install toolchain run deps and memgraph build deps
SHELL ["/bin/bash", "-c"]
RUN git clone https://github.com/memgraph/memgraph.git \
&& cd memgraph \
&& ./environment/os/ubuntu-22.04.sh install TOOLCHAIN_RUN_DEPS \
&& ./environment/os/ubuntu-22.04.sh install MEMGRAPH_BUILD_DEPS \
&& cd .. && rm -rf memgraph
# Add mgdeps-cache and bench-graph-api hostnames
RUN echo -e "10.42.16.10 mgdeps-cache\n10.42.16.10 bench-graph-api" >> /etc/hosts
# Create mg user and set as default
RUN useradd -m -s /bin/bash mg
USER mg
# Install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# Fix node
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
ENTRYPOINT ["sleep", "infinity"]


@ -45,7 +45,7 @@ set(mg_single_node_v2_sources
add_executable(memgraph ${mg_single_node_v2_sources})
target_include_directories(memgraph PUBLIC ${CMAKE_SOURCE_DIR}/include)
target_link_libraries(memgraph stdc++fs Threads::Threads
mg-telemetry mg-communication mg-communication-metrics mg-memory mg-utils mg-license mg-settings mg-glue mg-flags mg::system mg::replication_handler)
mg-telemetry mgcxx_text_search tantivy_text_search mg-communication mg-communication-metrics mg-memory mg-utils mg-license mg-settings mg-glue mg-flags mg::system mg::replication_handler)
# NOTE: `include/mg_procedure.syms` describes a pattern match for symbols which
# should be dynamically exported, so that `dlopen` can correctly link th


@ -35,16 +35,42 @@ DEFINE_VALIDATED_string(auth_module_executable, "", "Absolute path to the auth m
}
return true;
});
DEFINE_bool(auth_module_create_missing_user, true, "Set to false to disable creation of missing users.");
DEFINE_bool(auth_module_create_missing_role, true, "Set to false to disable creation of missing roles.");
DEFINE_bool(auth_module_manage_roles, true, "Set to false to disable management of roles through the auth module.");
DEFINE_VALIDATED_int32(auth_module_timeout_ms, 10000,
"Timeout (in milliseconds) used when waiting for a "
"response from the auth module.",
FLAG_IN_RANGE(100, 1800000));
// DEPRECATED FLAGS
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables, misc-unused-parameters)
DEFINE_VALIDATED_HIDDEN_bool(auth_module_create_missing_user, true,
"Set to false to disable creation of missing users.", {
spdlog::warn(
"auth_module_create_missing_user flag is deprecated. It not possible to create "
"users through the module anymore.");
return true;
});
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables, misc-unused-parameters)
DEFINE_VALIDATED_HIDDEN_bool(auth_module_create_missing_role, true,
"Set to false to disable creation of missing roles.", {
spdlog::warn(
"auth_module_create_missing_role flag is deprecated. It not possible to create "
"roles through the module anymore.");
return true;
});
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables, misc-unused-parameters)
DEFINE_VALIDATED_HIDDEN_bool(
auth_module_manage_roles, true, "Set to false to disable management of roles through the auth module.", {
spdlog::warn(
"auth_module_manage_roles flag is deprecated. It not possible to create roles through the module anymore.");
return true;
});
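Net effect of the three definitions above: the flags still parse, so existing deployments do not break at startup, but they are hidden from help output and only emit a deprecation warning; the authentication code below no longer consults them.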
namespace memgraph::auth {
const Auth::Epoch Auth::kStartEpoch = 1;
namespace {
#ifdef MG_ENTERPRISE
/**
@ -57,16 +83,19 @@ struct UpdateAuthData : memgraph::system::ISystemAction {
void DoDurability() override { /* Done during Auth execution */
}
bool DoReplication(replication::ReplicationClient &client, replication::ReplicationEpoch const &epoch,
bool DoReplication(replication::ReplicationClient &client, const utils::UUID &main_uuid,
replication::ReplicationEpoch const &epoch,
memgraph::system::Transaction const &txn) const override {
auto check_response = [](const replication::UpdateAuthDataRes &response) { return response.success; };
if (user_) {
return client.SteamAndFinalizeDelta<replication::UpdateAuthDataRpc>(
check_response, std::string{epoch.id()}, txn.last_committed_system_timestamp(), txn.timestamp(), *user_);
check_response, main_uuid, std::string{epoch.id()}, txn.last_committed_system_timestamp(), txn.timestamp(),
*user_);
}
if (role_) {
return client.SteamAndFinalizeDelta<replication::UpdateAuthDataRpc>(
check_response, std::string{epoch.id()}, txn.last_committed_system_timestamp(), txn.timestamp(), *role_);
check_response, main_uuid, std::string{epoch.id()}, txn.last_committed_system_timestamp(), txn.timestamp(),
*role_);
}
// Should never get here
MG_ASSERT(false, "Trying to update auth data that is not a user nor a role");
@ -88,7 +117,8 @@ struct DropAuthData : memgraph::system::ISystemAction {
void DoDurability() override { /* Done during Auth execution */
}
bool DoReplication(replication::ReplicationClient &client, replication::ReplicationEpoch const &epoch,
bool DoReplication(replication::ReplicationClient &client, const utils::UUID &main_uuid,
replication::ReplicationEpoch const &epoch,
memgraph::system::Transaction const &txn) const override {
auto check_response = [](const replication::DropAuthDataRes &response) { return response.success; };
@ -102,7 +132,8 @@ struct DropAuthData : memgraph::system::ISystemAction {
break;
}
return client.SteamAndFinalizeDelta<replication::DropAuthDataRpc>(
check_response, std::string{epoch.id()}, txn.last_committed_system_timestamp(), txn.timestamp(), type, name_);
check_response, main_uuid, std::string{epoch.id()}, txn.last_committed_system_timestamp(), txn.timestamp(),
type, name_);
}
void PostReplication(replication::RoleMainData &mainData) const override {}
@ -187,6 +218,17 @@ void MigrateVersions(kvstore::KVStore &store) {
version_str = kVersionV1;
}
}
auto ParseJson(std::string_view str) {
nlohmann::json data;
try {
data = nlohmann::json::parse(str);
} catch (const nlohmann::json::parse_error &e) {
throw AuthException("Couldn't load auth data!");
}
return data;
}
}; // namespace
Auth::Auth(std::string storage_directory, Config config)
@ -194,8 +236,11 @@ Auth::Auth(std::string storage_directory, Config config)
MigrateVersions(storage_);
}
std::optional<User> Auth::Authenticate(const std::string &username, const std::string &password) {
std::optional<UserOrRole> Auth::Authenticate(const std::string &username, const std::string &password) {
if (module_.IsUsed()) {
/*
* MODULE AUTH STORAGE
*/
const auto license_check_result = license::global_license_checker.IsEnterpriseValid(utils::global_settings);
if (license_check_result.HasError()) {
spdlog::warn(license::LicenseCheckErrorToString(license_check_result.GetError(), "authentication modules"));
@ -220,108 +265,64 @@ std::optional<User> Auth::Authenticate(const std::string &username, const std::s
auto is_authenticated = ret_authenticated.get<bool>();
const auto &rolename = ret_role.get<std::string>();
// Check if role is present
auto role = GetRole(rolename);
if (!role) {
spdlog::warn(utils::MessageWithLink("Couldn't authenticate user '{}' because the role '{}' doesn't exist.",
username, rolename, "https://memgr.ph/auth"));
return std::nullopt;
}
// Authenticate the user.
if (!is_authenticated) return std::nullopt;
/**
* TODO
* The auth module should not update auth data.
* There is no way to replicate it, and we should not be storing sensitive data if we don't have to.
*/
// Find or create the user and return it.
auto user = GetUser(username);
if (!user) {
if (FLAGS_auth_module_create_missing_user) {
user = AddUser(username, password);
if (!user) {
spdlog::warn(utils::MessageWithLink(
"Couldn't create the missing user '{}' using the auth module because the user already exists as a role.",
username, "https://memgr.ph/auth"));
return std::nullopt;
}
} else {
spdlog::warn(utils::MessageWithLink(
"Couldn't authenticate user '{}' using the auth module because the user doesn't exist.", username,
"https://memgr.ph/auth"));
return std::nullopt;
}
} else {
UpdatePassword(*user, password);
}
if (FLAGS_auth_module_manage_roles) {
if (!rolename.empty()) {
auto role = GetRole(rolename);
if (!role) {
if (FLAGS_auth_module_create_missing_role) {
role = AddRole(rolename);
if (!role) {
spdlog::warn(
utils::MessageWithLink("Couldn't authenticate user '{}' using the auth module because the user's "
"role '{}' already exists as a user.",
username, rolename, "https://memgr.ph/auth"));
return std::nullopt;
}
SaveRole(*role);
} else {
spdlog::warn(utils::MessageWithLink(
"Couldn't authenticate user '{}' using the auth module because the user's role '{}' doesn't exist.",
username, rolename, "https://memgr.ph/auth"));
return std::nullopt;
}
}
user->SetRole(*role);
} else {
user->ClearRole();
}
}
SaveUser(*user);
return user;
} else {
auto user = GetUser(username);
if (!user) {
spdlog::warn(utils::MessageWithLink("Couldn't authenticate user '{}' because the user doesn't exist.", username,
"https://memgr.ph/auth"));
return std::nullopt;
}
if (!user->CheckPassword(password)) {
spdlog::warn(utils::MessageWithLink("Couldn't authenticate user '{}' because the password is not correct.",
username, "https://memgr.ph/auth"));
return std::nullopt;
}
if (user->UpgradeHash(password)) {
SaveUser(*user);
}
return user;
return RoleWUsername{username, std::move(*role)};
}
/*
* LOCAL AUTH STORAGE
*/
auto user = GetUser(username);
if (!user) {
spdlog::warn(utils::MessageWithLink("Couldn't authenticate user '{}' because the user doesn't exist.", username,
"https://memgr.ph/auth"));
return std::nullopt;
}
if (!user->CheckPassword(password)) {
spdlog::warn(utils::MessageWithLink("Couldn't authenticate user '{}' because the password is not correct.",
username, "https://memgr.ph/auth"));
return std::nullopt;
}
if (user->UpgradeHash(password)) {
SaveUser(*user);
}
return user;
}
std::optional<User> Auth::GetUser(const std::string &username_orig) const {
auto username = utils::ToLowerCase(username_orig);
auto existing_user = storage_.Get(kUserPrefix + username);
if (!existing_user) return std::nullopt;
nlohmann::json data;
try {
data = nlohmann::json::parse(*existing_user);
} catch (const nlohmann::json::parse_error &e) {
throw AuthException("Couldn't load user data!");
}
auto user = User::Deserialize(data);
auto link = storage_.Get(kLinkPrefix + username);
void Auth::LinkUser(User &user) const {
auto link = storage_.Get(kLinkPrefix + user.username());
if (link) {
auto role = GetRole(*link);
if (role) {
user.SetRole(*role);
}
}
}
std::optional<User> Auth::GetUser(const std::string &username_orig) const {
if (module_.IsUsed()) return std::nullopt;  // Users are not supported when using a module
auto username = utils::ToLowerCase(username_orig);
auto existing_user = storage_.Get(kUserPrefix + username);
if (!existing_user) return std::nullopt;
auto user = User::Deserialize(ParseJson(*existing_user));
LinkUser(user);
return user;
}
void Auth::SaveUser(const User &user, system::Transaction *system_tx) {
DisableIfModuleUsed();
bool success = false;
if (const auto *role = user.role(); role != nullptr) {
success = storage_.PutMultiple(
@ -333,6 +334,10 @@ void Auth::SaveUser(const User &user, system::Transaction *system_tx) {
if (!success) {
throw AuthException("Couldn't save user '{}'!", user.username());
}
// Durability updated -> new epoch
UpdateEpoch();
// All changes to the user end up calling this function, so no need to add a delta anywhere else
if (system_tx) {
#ifdef MG_ENTERPRISE
@ -342,6 +347,7 @@ void Auth::SaveUser(const User &user, system::Transaction *system_tx) {
}
void Auth::UpdatePassword(auth::User &user, const std::optional<std::string> &password) {
DisableIfModuleUsed();
// Check if null
if (!password) {
if (!config_.password_permit_null) {
@ -373,6 +379,7 @@ void Auth::UpdatePassword(auth::User &user, const std::optional<std::string> &pa
std::optional<User> Auth::AddUser(const std::string &username, const std::optional<std::string> &password,
system::Transaction *system_tx) {
DisableIfModuleUsed();
if (!NameRegexMatch(username)) {
throw AuthException("Invalid user name.");
}
@ -387,12 +394,17 @@ std::optional<User> Auth::AddUser(const std::string &username, const std::option
}
bool Auth::RemoveUser(const std::string &username_orig, system::Transaction *system_tx) {
DisableIfModuleUsed();
auto username = utils::ToLowerCase(username_orig);
if (!storage_.Get(kUserPrefix + username)) return false;
std::vector<std::string> keys({kLinkPrefix + username, kUserPrefix + username});
if (!storage_.DeleteMultiple(keys)) {
throw AuthException("Couldn't remove user '{}'!", username);
}
// Durability updated -> new epoch
UpdateEpoch();
// Handling drop user delta
if (system_tx) {
#ifdef MG_ENTERPRISE
@ -407,9 +419,12 @@ std::vector<auth::User> Auth::AllUsers() const {
for (auto it = storage_.begin(kUserPrefix); it != storage_.end(kUserPrefix); ++it) {
auto username = it->first.substr(kUserPrefix.size());
if (username != utils::ToLowerCase(username)) continue;
auto user = GetUser(username);
if (user) {
ret.push_back(std::move(*user));
try {
User user = auth::User::Deserialize(ParseJson(it->second)); // Will throw on failure
LinkUser(user);
ret.emplace_back(std::move(user));
} catch (AuthException &) {
continue;
}
}
return ret;
@ -420,9 +435,12 @@ std::vector<std::string> Auth::AllUsernames() const {
for (auto it = storage_.begin(kUserPrefix); it != storage_.end(kUserPrefix); ++it) {
auto username = it->first.substr(kUserPrefix.size());
if (username != utils::ToLowerCase(username)) continue;
auto user = GetUser(username);
if (user) {
ret.push_back(username);
try {
// Check if serialized correctly
memgraph::auth::User::Deserialize(ParseJson(it->second)); // Will throw on failure
ret.emplace_back(std::move(username));
} catch (AuthException &) {
continue;
}
}
return ret;
@ -430,25 +448,24 @@ std::vector<std::string> Auth::AllUsernames() const {
bool Auth::HasUsers() const { return storage_.begin(kUserPrefix) != storage_.end(kUserPrefix); }
bool Auth::AccessControlled() const { return HasUsers() || module_.IsUsed(); }
std::optional<Role> Auth::GetRole(const std::string &rolename_orig) const {
auto rolename = utils::ToLowerCase(rolename_orig);
auto existing_role = storage_.Get(kRolePrefix + rolename);
if (!existing_role) return std::nullopt;
nlohmann::json data;
try {
data = nlohmann::json::parse(*existing_role);
} catch (const nlohmann::json::parse_error &e) {
throw AuthException("Couldn't load role data!");
}
return Role::Deserialize(data);
return Role::Deserialize(ParseJson(*existing_role));
}
void Auth::SaveRole(const Role &role, system::Transaction *system_tx) {
if (!storage_.Put(kRolePrefix + role.rolename(), role.Serialize().dump())) {
throw AuthException("Couldn't save role '{}'!", role.rolename());
}
// Durability updated -> new epoch
UpdateEpoch();
// All changes to the role end up calling this function, so no need to add a delta anywhere else
if (system_tx) {
#ifdef MG_ENTERPRISE
@ -481,6 +498,10 @@ bool Auth::RemoveRole(const std::string &rolename_orig, system::Transaction *sys
if (!storage_.DeleteMultiple(keys)) {
throw AuthException("Couldn't remove role '{}'!", rolename);
}
// Durability updated -> new epoch
UpdateEpoch();
// Handling drop role delta
if (system_tx) {
#ifdef MG_ENTERPRISE
@ -495,11 +516,8 @@ std::vector<auth::Role> Auth::AllRoles() const {
for (auto it = storage_.begin(kRolePrefix); it != storage_.end(kRolePrefix); ++it) {
auto rolename = it->first.substr(kRolePrefix.size());
if (rolename != utils::ToLowerCase(rolename)) continue;
if (auto role = GetRole(rolename)) {
ret.push_back(*role);
} else {
throw AuthException("Couldn't load role '{}'!", rolename);
}
Role role = memgraph::auth::Role::Deserialize(ParseJson(it->second)); // Will throw on failure
ret.emplace_back(std::move(role));
}
return ret;
}
@ -509,14 +527,19 @@ std::vector<std::string> Auth::AllRolenames() const {
for (auto it = storage_.begin(kRolePrefix); it != storage_.end(kRolePrefix); ++it) {
auto rolename = it->first.substr(kRolePrefix.size());
if (rolename != utils::ToLowerCase(rolename)) continue;
if (auto role = GetRole(rolename)) {
ret.push_back(rolename);
try {
// Check that the data is serialized correctly
memgraph::auth::Role::Deserialize(ParseJson(it->second));
ret.emplace_back(std::move(rolename));
} catch (AuthException &) {
continue;
}
}
return ret;
}
std::vector<auth::User> Auth::AllUsersForRole(const std::string &rolename_orig) const {
DisableIfModuleUsed();
const auto rolename = utils::ToLowerCase(rolename_orig);
std::vector<auth::User> ret;
for (auto it = storage_.begin(kLinkPrefix); it != storage_.end(kLinkPrefix); ++it) {
@ -535,51 +558,176 @@ std::vector<auth::User> Auth::AllUsersForRole(const std::string &rolename_orig)
}
#ifdef MG_ENTERPRISE
bool Auth::GrantDatabaseToUser(const std::string &db, const std::string &name, system::Transaction *system_tx) {
if (auto user = GetUser(name)) {
if (db == kAllDatabases) {
user->db_access().GrantAll();
} else {
user->db_access().Add(db);
Auth::Result Auth::GrantDatabase(const std::string &db, const std::string &name, system::Transaction *system_tx) {
using enum Auth::Result;
if (module_.IsUsed()) {
if (auto role = GetRole(name)) {
GrantDatabase(db, *role, system_tx);
return SUCCESS;
}
SaveUser(*user, system_tx);
return true;
return NO_ROLE;
}
return false;
if (auto user = GetUser(name)) {
GrantDatabase(db, *user, system_tx);
return SUCCESS;
}
if (auto role = GetRole(name)) {
GrantDatabase(db, *role, system_tx);
return SUCCESS;
}
return NO_USER_ROLE;
}
bool Auth::RevokeDatabaseFromUser(const std::string &db, const std::string &name, system::Transaction *system_tx) {
if (auto user = GetUser(name)) {
if (db == kAllDatabases) {
user->db_access().DenyAll();
} else {
user->db_access().Remove(db);
}
SaveUser(*user, system_tx);
return true;
void Auth::GrantDatabase(const std::string &db, User &user, system::Transaction *system_tx) {
if (db == kAllDatabases) {
user.db_access().GrantAll();
} else {
user.db_access().Grant(db);
}
return false;
SaveUser(user, system_tx);
}
void Auth::GrantDatabase(const std::string &db, Role &role, system::Transaction *system_tx) {
if (db == kAllDatabases) {
role.db_access().GrantAll();
} else {
role.db_access().Grant(db);
}
SaveRole(role, system_tx);
}
Auth::Result Auth::DenyDatabase(const std::string &db, const std::string &name, system::Transaction *system_tx) {
using enum Auth::Result;
if (module_.IsUsed()) {
if (auto role = GetRole(name)) {
DenyDatabase(db, *role, system_tx);
return SUCCESS;
}
return NO_ROLE;
}
if (auto user = GetUser(name)) {
DenyDatabase(db, *user, system_tx);
return SUCCESS;
}
if (auto role = GetRole(name)) {
DenyDatabase(db, *role, system_tx);
return SUCCESS;
}
return NO_USER_ROLE;
}
void Auth::DenyDatabase(const std::string &db, User &user, system::Transaction *system_tx) {
if (db == kAllDatabases) {
user.db_access().DenyAll();
} else {
user.db_access().Deny(db);
}
SaveUser(user, system_tx);
}
void Auth::DenyDatabase(const std::string &db, Role &role, system::Transaction *system_tx) {
if (db == kAllDatabases) {
role.db_access().DenyAll();
} else {
role.db_access().Deny(db);
}
SaveRole(role, system_tx);
}
Auth::Result Auth::RevokeDatabase(const std::string &db, const std::string &name, system::Transaction *system_tx) {
using enum Auth::Result;
if (module_.IsUsed()) {
if (auto role = GetRole(name)) {
RevokeDatabase(db, *role, system_tx);
return SUCCESS;
}
return NO_ROLE;
}
if (auto user = GetUser(name)) {
RevokeDatabase(db, *user, system_tx);
return SUCCESS;
}
if (auto role = GetRole(name)) {
RevokeDatabase(db, *role, system_tx);
return SUCCESS;
}
return NO_USER_ROLE;
}
void Auth::RevokeDatabase(const std::string &db, User &user, system::Transaction *system_tx) {
if (db == kAllDatabases) {
user.db_access().RevokeAll();
} else {
user.db_access().Revoke(db);
}
SaveUser(user, system_tx);
}
void Auth::RevokeDatabase(const std::string &db, Role &role, system::Transaction *system_tx) {
if (db == kAllDatabases) {
role.db_access().RevokeAll();
} else {
role.db_access().Revoke(db);
}
SaveRole(role, system_tx);
}
void Auth::DeleteDatabase(const std::string &db, system::Transaction *system_tx) {
for (auto it = storage_.begin(kUserPrefix); it != storage_.end(kUserPrefix); ++it) {
auto username = it->first.substr(kUserPrefix.size());
if (auto user = GetUser(username)) {
user->db_access().Delete(db);
SaveUser(*user, system_tx);
try {
User user = auth::User::Deserialize(ParseJson(it->second));
LinkUser(user);
user.db_access().Revoke(db);
SaveUser(user, system_tx);
} catch (AuthException &) {
continue;
}
}
for (auto it = storage_.begin(kRolePrefix); it != storage_.end(kRolePrefix); ++it) {
auto rolename = it->first.substr(kRolePrefix.size());
try {
auto role = memgraph::auth::Role::Deserialize(ParseJson(it->second));
role.db_access().Revoke(db);
SaveRole(role, system_tx);
} catch (AuthException &) {
continue;
}
}
}
bool Auth::SetMainDatabase(std::string_view db, const std::string &name, system::Transaction *system_tx) {
if (auto user = GetUser(name)) {
if (!user->db_access().SetDefault(db)) {
throw AuthException("Couldn't set default database '{}' for user '{}'!", db, name);
Auth::Result Auth::SetMainDatabase(std::string_view db, const std::string &name, system::Transaction *system_tx) {
using enum Auth::Result;
if (module_.IsUsed()) {
if (auto role = GetRole(name)) {
SetMainDatabase(db, *role, system_tx);
return SUCCESS;
}
SaveUser(*user, system_tx);
return true;
return NO_ROLE;
}
return false;
if (auto user = GetUser(name)) {
SetMainDatabase(db, *user, system_tx);
return SUCCESS;
}
if (auto role = GetRole(name)) {
SetMainDatabase(db, *role, system_tx);
return SUCCESS;
}
return NO_USER_ROLE;
}
void Auth::SetMainDatabase(std::string_view db, User &user, system::Transaction *system_tx) {
if (!user.db_access().SetMain(db)) {
throw AuthException("Couldn't set default database '{}' for '{}'!", db, user.username());
}
SaveUser(user, system_tx);
}
void Auth::SetMainDatabase(std::string_view db, Role &role, system::Transaction *system_tx) {
if (!role.db_access().SetMain(db)) {
throw AuthException("Couldn't set default database '{}' for '{}'!", db, role.rolename());
}
SaveRole(role, system_tx);
}
#endif
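Since Authenticate now hands back std::optional<UserOrRole> (a std::variant<User, RoleWUsername>) and the database grants return Auth::Result instead of bool, every call site has to branch. A minimal consumer sketch, assuming the API exactly as diffed above; the OnLogin wrapper, the include path and the "reports" database are illustrative:

// Sketch only; include path assumed.
#include <optional>
#include <string>
#include <type_traits>
#include <variant>
#include "auth/auth.hpp"

void OnLogin(memgraph::auth::Auth &auth, const std::string &username, const std::string &password) {
  std::optional<memgraph::auth::UserOrRole> who = auth.Authenticate(username, password);
  if (!who) return;  // bad credentials, unknown user, or module rejection
  std::visit(
      [](const auto &v) {
        using T = std::decay_t<decltype(v)>;
        if constexpr (std::is_same_v<T, memgraph::auth::User>) {
          // Local-storage path: a full user object (permissions, db access, linked role).
        } else {
          // Module path: a role tagged with the login username (RoleWUsername).
        }
      },
      *who);
#ifdef MG_ENTERPRISE
  // Grants now report what was missing instead of returning a bare bool.
  using Result = memgraph::auth::Auth::Result;
  switch (auth.GrantDatabase("reports", username)) {
    case Result::SUCCESS: break;
    case Result::NO_USER_ROLE: /* no user or role with that name */ break;
    case Result::NO_ROLE: /* module in use and no matching role */ break;
  }
#endif
}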


@ -29,6 +29,18 @@ using SynchedAuth = memgraph::utils::Synchronized<memgraph::auth::Auth, memgraph
static const constexpr char *const kAllDatabases = "*";
struct RoleWUsername : Role {
template <typename... Args>
RoleWUsername(std::string_view username, Args &&...args) : Role{std::forward<Args>(args)...}, username_{username} {}
std::string username() { return username_; }
const std::string &username() const { return username_; }
private:
std::string username_;
};
using UserOrRole = std::variant<User, RoleWUsername>;
/**
* This class serves as the main Authentication/Authorization storage.
* It provides functions for managing Users, Roles, Permissions and FineGrainedAccessPermissions.
@ -61,6 +73,25 @@ class Auth final {
std::regex password_regex{password_regex_str};
};
struct Epoch {
Epoch() : epoch_{0} {}
Epoch(unsigned e) : epoch_{e} {}
Epoch operator++() { return ++epoch_; }
bool operator==(const Epoch &rhs) const = default;
private:
unsigned epoch_;
};
static const Epoch kStartEpoch;
enum class Result {
SUCCESS,
NO_USER_ROLE,
NO_ROLE,
};
explicit Auth(std::string storage_directory, Config config);
/**
@ -89,7 +120,7 @@ class Auth final {
* @return a user when the username and password match, nullopt otherwise
* @throw AuthException if unable to authenticate for whatever reason.
*/
std::optional<User> Authenticate(const std::string &username, const std::string &password);
std::optional<UserOrRole> Authenticate(const std::string &username, const std::string &password);
/**
* Gets a user from the storage.
@ -101,6 +132,8 @@ class Auth final {
*/
std::optional<User> GetUser(const std::string &username) const;
void LinkUser(User &user) const;
/**
* Saves a user object to the storage.
*
@ -163,6 +196,13 @@ class Auth final {
*/
bool HasUsers() const;
/**
* Returns whether the access is controlled by authentication/authorization.
*
* @return `true` if auth needs to run
*/
bool AccessControlled() const;
/**
* Gets a role from the storage.
*
@ -173,6 +213,37 @@ class Auth final {
*/
std::optional<Role> GetRole(const std::string &rolename) const;
std::optional<UserOrRole> GetUserOrRole(const std::optional<std::string> &username,
const std::optional<std::string> &rolename) const {
auto expect = [](bool condition, std::string &&msg) {
if (!condition) throw AuthException(std::move(msg));
};
// Special case if we are using a module; we must find the specified role
if (module_.IsUsed()) {
expect(username && rolename, "When using a module, a role needs to be connected to a username.");
const auto role = GetRole(*rolename);
expect(role != std::nullopt, "No role named " + *rolename);
return UserOrRole(auth::RoleWUsername{*username, *role});
}
// First check if we need to find a role
if (username && rolename) {
const auto role = GetRole(*rolename);
expect(role != std::nullopt, "No role named " + *rolename);
return UserOrRole(auth::RoleWUsername{*username, *role});
}
// We are only looking for a user
if (username) {
const auto user = GetUser(*username);
expect(user != std::nullopt, "No user named " + *username);
return *user;
}
// No user or role
return std::nullopt;
}
/**
* Saves a role object to the storage.
*
@ -229,16 +300,6 @@ class Auth final {
std::vector<User> AllUsersForRole(const std::string &rolename) const;
#ifdef MG_ENTERPRISE
/**
* @brief Revoke access to individual database for a user.
*
* @param db name of the database to revoke
* @param name user's username
* @return true on success
* @throw AuthException if unable to find or update the user
*/
bool RevokeDatabaseFromUser(const std::string &db, const std::string &name, system::Transaction *system_tx = nullptr);
/**
* @brief Grant access to individual database for a user or role.
*
@ -247,7 +308,33 @@ class Auth final {
* @return SUCCESS if the update was applied, otherwise the missing target
* @throw AuthException if unable to find or update the user
*/
bool GrantDatabaseToUser(const std::string &db, const std::string &name, system::Transaction *system_tx = nullptr);
Result GrantDatabase(const std::string &db, const std::string &name, system::Transaction *system_tx = nullptr);
void GrantDatabase(const std::string &db, User &user, system::Transaction *system_tx = nullptr);
void GrantDatabase(const std::string &db, Role &role, system::Transaction *system_tx = nullptr);
/**
* @brief Deny access to individual database for a user or role.
*
* @param db name of the database to deny
* @param name user's or role's name
* @return SUCCESS if the update was applied, otherwise the missing target
* @throw AuthException if unable to find or update the user
*/
Result DenyDatabase(const std::string &db, const std::string &name, system::Transaction *system_tx = nullptr);
void DenyDatabase(const std::string &db, User &user, system::Transaction *system_tx = nullptr);
void DenyDatabase(const std::string &db, Role &role, system::Transaction *system_tx = nullptr);
/**
* @brief Revoke access to individual database for a user or role.
*
* @param db name of the database to revoke
* @param name user's or role's name
* @return SUCCESS if the update was applied, otherwise the missing target
* @throw AuthException if unable to find or update the user
*/
Result RevokeDatabase(const std::string &db, const std::string &name, system::Transaction *system_tx = nullptr);
void RevokeDatabase(const std::string &db, User &user, system::Transaction *system_tx = nullptr);
void RevokeDatabase(const std::string &db, Role &role, system::Transaction *system_tx = nullptr);
/**
* @brief Delete a database from all users.
@ -265,9 +352,17 @@ class Auth final {
* @return SUCCESS if the update was applied, otherwise the missing target
* @throw AuthException if unable to find or update the user
*/
bool SetMainDatabase(std::string_view db, const std::string &name, system::Transaction *system_tx = nullptr);
Result SetMainDatabase(std::string_view db, const std::string &name, system::Transaction *system_tx = nullptr);
void SetMainDatabase(std::string_view db, User &user, system::Transaction *system_tx = nullptr);
void SetMainDatabase(std::string_view db, Role &role, system::Transaction *system_tx = nullptr);
#endif
bool UpToDate(Epoch &e) const {
bool res = e == epoch_;
e = epoch_;
return res;
}
private:
/**
* @brief
@ -278,11 +373,18 @@ class Auth final {
*/
bool NameRegexMatch(const std::string &user_or_role) const;
void UpdateEpoch() { ++epoch_; }
void DisableIfModuleUsed() const {
if (module_.IsUsed()) throw AuthException("Operation not permitted when using an authentication module.");
}
// Even though the `kvstore::KVStore` class is guaranteed to be thread-safe,
// Auth is not thread-safe because modifying users and roles might require
// more than one operation on the storage.
kvstore::KVStore storage_;
auth::Module module_;
Config config_;
Epoch epoch_{kStartEpoch};
};
} // namespace memgraph::auth
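The Epoch counter plus UpToDate() gives read-side components a cheap change-detection hook: every durable mutation in the .cpp above calls UpdateEpoch(), and a reader compares (and refreshes) its remembered epoch in one call. A sketch of that polling pattern; the PrivilegeCache consumer and the include path are hypothetical:

#include "auth/auth.hpp"  // path assumed

struct PrivilegeCache {
  // A default-constructed Epoch is 0 while Auth starts at kStartEpoch (1),
  // so the very first MaybeRefresh() reports stale and forces a build.
  memgraph::auth::Auth::Epoch seen{};

  void MaybeRefresh(const memgraph::auth::Auth &auth) {
    // UpToDate() compares and then overwrites `seen`, so the cache is rebuilt
    // at most once per auth change (SaveUser/SaveRole/Remove* bump the epoch).
    if (!auth.UpToDate(seen)) Rebuild(auth);
  }

  void Rebuild(const memgraph::auth::Auth &auth) {
    // e.g. re-read auth.AllUsers() / auth.AllRoles() into local lookup tables.
  }
};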


@ -8,10 +8,12 @@
#pragma once
#include <json/json.hpp>
#include <cstdint>
#include <optional>
#include <string>
#include <json/json.hpp>
namespace memgraph::auth {
/// Need to be stable, auth durability depends on this
enum class PasswordHashAlgorithm : uint8_t { BCRYPT = 0, SHA256 = 1, SHA256_MULTIPLE = 2 };


@ -425,10 +425,11 @@ Role::Role(const std::string &rolename, const Permissions &permissions)
: rolename_(utils::ToLowerCase(rolename)), permissions_(permissions) {}
#ifdef MG_ENTERPRISE
Role::Role(const std::string &rolename, const Permissions &permissions,
FineGrainedAccessHandler fine_grained_access_handler)
FineGrainedAccessHandler fine_grained_access_handler, Databases db_access)
: rolename_(utils::ToLowerCase(rolename)),
permissions_(permissions),
fine_grained_access_handler_(std::move(fine_grained_access_handler)) {}
fine_grained_access_handler_(std::move(fine_grained_access_handler)),
db_access_(std::move(db_access)) {}
#endif
const std::string &Role::rolename() const { return rolename_; }
@ -454,8 +455,10 @@ nlohmann::json Role::Serialize() const {
#ifdef MG_ENTERPRISE
if (memgraph::license::global_license_checker.IsEnterpriseValidFast()) {
data[kFineGrainedAccessHandler] = fine_grained_access_handler_.Serialize();
data[kDatabases] = db_access_.Serialize();
} else {
data[kFineGrainedAccessHandler] = {};
data[kDatabases] = {};
}
#endif
return data;
@ -471,12 +474,21 @@ Role Role::Deserialize(const nlohmann::json &data) {
auto permissions = Permissions::Deserialize(data[kPermissions]);
#ifdef MG_ENTERPRISE
if (memgraph::license::global_license_checker.IsEnterpriseValidFast()) {
Databases db_access;
if (data[kDatabases].is_structured()) {
db_access = Databases::Deserialize(data[kDatabases]);
} else {
// Back-compatibility
spdlog::warn("Role without specified database access. Given access to the default database.");
db_access.Grant(dbms::kDefaultDB);
db_access.SetMain(dbms::kDefaultDB);
}
FineGrainedAccessHandler fine_grained_access_handler;
// We can have an empty fine_grained if the user was created without a valid license
if (data[kFineGrainedAccessHandler].is_object()) {
fine_grained_access_handler = FineGrainedAccessHandler::Deserialize(data[kFineGrainedAccessHandler]);
}
return {data[kRoleName], permissions, std::move(fine_grained_access_handler)};
return {data[kRoleName], permissions, std::move(fine_grained_access_handler), std::move(db_access)};
}
#endif
return {data[kRoleName], permissions};
@ -493,7 +505,7 @@ bool operator==(const Role &first, const Role &second) {
}
#ifdef MG_ENTERPRISE
void Databases::Add(std::string_view db) {
void Databases::Grant(std::string_view db) {
if (allow_all_) {
grants_dbs_.clear();
allow_all_ = false;
@ -502,19 +514,19 @@ void Databases::Add(std::string_view db) {
denies_dbs_.erase(std::string{db}); // TODO: C++23 use transparent key compare
}
void Databases::Remove(const std::string &db) {
void Databases::Deny(const std::string &db) {
denies_dbs_.emplace(db);
grants_dbs_.erase(db);
}
void Databases::Delete(const std::string &db) {
void Databases::Revoke(const std::string &db) {
denies_dbs_.erase(db);
if (!allow_all_) {
grants_dbs_.erase(db);
}
// Reset if default deleted
if (default_db_ == db) {
default_db_ = "";
if (main_db_ == db) {
main_db_ = "";
}
}
@ -530,9 +542,16 @@ void Databases::DenyAll() {
denies_dbs_.clear();
}
bool Databases::SetDefault(std::string_view db) {
void Databases::RevokeAll() {
allow_all_ = false;
grants_dbs_.clear();
denies_dbs_.clear();
main_db_ = "";
}
bool Databases::SetMain(std::string_view db) {
if (!Contains(db)) return false;
default_db_ = db;
main_db_ = db;
return true;
}
@ -540,11 +559,11 @@ bool Databases::SetDefault(std::string_view db) {
return !denies_dbs_.contains(db) && (allow_all_ || grants_dbs_.contains(db));
}
const std::string &Databases::GetDefault() const {
if (!Contains(default_db_)) {
throw AuthException("No access to the set default database \"{}\".", default_db_);
const std::string &Databases::GetMain() const {
if (!Contains(main_db_)) {
throw AuthException("No access to the set default database \"{}\".", main_db_);
}
return default_db_;
return main_db_;
}
nlohmann::json Databases::Serialize() const {
@ -552,7 +571,7 @@ nlohmann::json Databases::Serialize() const {
data[kGrants] = grants_dbs_;
data[kDenies] = denies_dbs_;
data[kAllowAll] = allow_all_;
data[kDefault] = default_db_;
data[kDefault] = main_db_;
return data;
}
@ -719,15 +738,16 @@ User User::Deserialize(const nlohmann::json &data) {
} else {
// Back-compatibility
spdlog::warn("User without specified database access. Given access to the default database.");
db_access.Add(dbms::kDefaultDB);
db_access.SetDefault(dbms::kDefaultDB);
db_access.Grant(dbms::kDefaultDB);
db_access.SetMain(dbms::kDefaultDB);
}
FineGrainedAccessHandler fine_grained_access_handler;
// We can have an empty fine_grained if the user was created without a valid license
if (data[kFineGrainedAccessHandler].is_object()) {
fine_grained_access_handler = FineGrainedAccessHandler::Deserialize(data[kFineGrainedAccessHandler]);
}
return {data[kUsername], std::move(password_hash), permissions, std::move(fine_grained_access_handler), db_access};
return {data[kUsername], std::move(password_hash), permissions, std::move(fine_grained_access_handler),
std::move(db_access)};
}
#endif
return {data[kUsername], std::move(password_hash), permissions};


@ -205,52 +205,10 @@ class FineGrainedAccessHandler final {
bool operator==(const FineGrainedAccessHandler &first, const FineGrainedAccessHandler &second);
#endif
class Role final {
public:
Role() = default;
explicit Role(const std::string &rolename);
Role(const std::string &rolename, const Permissions &permissions);
#ifdef MG_ENTERPRISE
Role(const std::string &rolename, const Permissions &permissions,
FineGrainedAccessHandler fine_grained_access_handler);
#endif
Role(const Role &) = default;
Role &operator=(const Role &) = default;
Role(Role &&) noexcept = default;
Role &operator=(Role &&) noexcept = default;
~Role() = default;
const std::string &rolename() const;
const Permissions &permissions() const;
Permissions &permissions();
#ifdef MG_ENTERPRISE
const FineGrainedAccessHandler &fine_grained_access_handler() const;
FineGrainedAccessHandler &fine_grained_access_handler();
const FineGrainedAccessPermissions &GetFineGrainedAccessLabelPermissions() const;
const FineGrainedAccessPermissions &GetFineGrainedAccessEdgeTypePermissions() const;
#endif
nlohmann::json Serialize() const;
/// @throw AuthException if unable to deserialize.
static Role Deserialize(const nlohmann::json &data);
friend bool operator==(const Role &first, const Role &second);
private:
std::string rolename_;
Permissions permissions_;
#ifdef MG_ENTERPRISE
FineGrainedAccessHandler fine_grained_access_handler_;
#endif
};
bool operator==(const Role &first, const Role &second);
#ifdef MG_ENTERPRISE
class Databases final {
public:
Databases() : grants_dbs_{std::string{dbms::kDefaultDB}}, allow_all_(false), default_db_(dbms::kDefaultDB) {}
Databases() : grants_dbs_{std::string{dbms::kDefaultDB}}, allow_all_(false), main_db_(dbms::kDefaultDB) {}
Databases(const Databases &) = default;
Databases &operator=(const Databases &) = default;
@ -263,7 +221,7 @@ class Databases final {
*
* @param db name of the database to grant access to
*/
void Add(std::string_view db);
void Grant(std::string_view db);
/**
* @brief Remove database from the list of granted access.
@ -272,7 +230,7 @@ class Databases final {
*
* @param db name of the database to deny access to
*/
void Remove(const std::string &db);
void Deny(const std::string &db);
/**
* @brief Called when database is dropped. Removes it from granted (if allow_all is false) and denied set.
@ -280,7 +238,7 @@ class Databases final {
*
* @param db name of the database being dropped
*/
void Delete(const std::string &db);
void Revoke(const std::string &db);
/**
* @brief Set allow_all_ to true and clears grants and denied sets.
@ -292,10 +250,15 @@ class Databases final {
*/
void DenyAll();
/**
* @brief Set allow_all_ to false and clears grants and denied sets.
*/
void RevokeAll();
/**
* @brief Set the main database.
*/
bool SetDefault(std::string_view db);
bool SetMain(std::string_view db);
/**
* @brief Checks if access is granted to the database.
@ -304,11 +267,13 @@ class Databases final {
* @return true if not denied and either explicitly granted or allow_all is set
*/
bool Contains(std::string_view db) const;
bool Denies(std::string_view db_name) const { return denies_dbs_.contains(db_name); }
bool Grants(std::string_view db_name) const { return allow_all_ || grants_dbs_.contains(db_name); }
bool GetAllowAll() const { return allow_all_; }
const std::set<std::string, std::less<>> &GetGrants() const { return grants_dbs_; }
const std::set<std::string, std::less<>> &GetDenies() const { return denies_dbs_; }
const std::string &GetDefault() const;
const std::string &GetMain() const;
nlohmann::json Serialize() const;
/// @throw AuthException if unable to deserialize.
@ -320,15 +285,69 @@ class Databases final {
: grants_dbs_(std::move(grant)),
denies_dbs_(std::move(deny)),
allow_all_(allow_all),
default_db_(std::move(default_db)) {}
main_db_(std::move(default_db)) {}
std::set<std::string, std::less<>> grants_dbs_; //!< set of databases with granted access
std::set<std::string, std::less<>> denies_dbs_; //!< set of databases with denied access
bool allow_all_; //!< flag to allow access to everything (denied overrides this)
std::string default_db_; //!< user's default database
std::string main_db_; //!< user's default database
};
#endif
class Role {
public:
Role() = default;
explicit Role(const std::string &rolename);
Role(const std::string &rolename, const Permissions &permissions);
#ifdef MG_ENTERPRISE
Role(const std::string &rolename, const Permissions &permissions,
FineGrainedAccessHandler fine_grained_access_handler, Databases db_access = {});
#endif
Role(const Role &) = default;
Role &operator=(const Role &) = default;
Role(Role &&) noexcept = default;
Role &operator=(Role &&) noexcept = default;
~Role() = default;
const std::string &rolename() const;
const Permissions &permissions() const;
Permissions &permissions();
Permissions GetPermissions() const { return permissions_; }
#ifdef MG_ENTERPRISE
const FineGrainedAccessHandler &fine_grained_access_handler() const;
FineGrainedAccessHandler &fine_grained_access_handler();
const FineGrainedAccessPermissions &GetFineGrainedAccessLabelPermissions() const;
const FineGrainedAccessPermissions &GetFineGrainedAccessEdgeTypePermissions() const;
#endif
#ifdef MG_ENTERPRISE
Databases &db_access() { return db_access_; }
const Databases &db_access() const { return db_access_; }
bool DeniesDB(std::string_view db_name) const { return db_access_.Denies(db_name); }
bool GrantsDB(std::string_view db_name) const { return db_access_.Grants(db_name); }
bool HasAccess(std::string_view db_name) const { return !DeniesDB(db_name) && GrantsDB(db_name); }
#endif
nlohmann::json Serialize() const;
/// @throw AuthException if unable to deserialize.
static Role Deserialize(const nlohmann::json &data);
friend bool operator==(const Role &first, const Role &second);
private:
std::string rolename_;
Permissions permissions_;
#ifdef MG_ENTERPRISE
FineGrainedAccessHandler fine_grained_access_handler_;
Databases db_access_;
#endif
};
bool operator==(const Role &first, const Role &second);
// TODO (mferencevic): Implement password expiry.
class User final {
public:
@ -388,6 +407,18 @@ class User final {
#ifdef MG_ENTERPRISE
Databases &db_access() { return database_access_; }
const Databases &db_access() const { return database_access_; }
bool DeniesDB(std::string_view db_name) const {
bool denies = database_access_.Denies(db_name);
if (role_) denies |= role_->DeniesDB(db_name);
return denies;
}
bool GrantsDB(std::string_view db_name) const {
bool grants = database_access_.Grants(db_name);
if (role_) grants |= role_->GrantsDB(db_name);
return grants;
}
bool HasAccess(std::string_view db_name) const { return !DeniesDB(db_name) && GrantsDB(db_name); }
#endif
nlohmann::json Serialize() const;
@ -403,7 +434,7 @@ class User final {
Permissions permissions_;
#ifdef MG_ENTERPRISE
FineGrainedAccessHandler fine_grained_access_handler_;
Databases database_access_;
Databases database_access_{};
#endif
std::optional<Role> role_;
};
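To pin down the renamed Databases verbs and the new user/role union: Deny beats both Grant and GrantAll, Revoke merely forgets an entry, and User::HasAccess ORs the user's and the role's grants while a deny from either side wins. A sketch under those rules, assuming an enterprise build and name-taking constructors as declared in this header (include path assumed):

#include <cassert>
#include "auth/models.hpp"  // path assumed

void DatabasesSemanticsDemo() {
  memgraph::auth::Databases dbs;
  dbs.GrantAll();                    // allow_all_ = true, both sets cleared
  dbs.Deny("secret");                // a deny overrides allow_all_ and grants
  assert(dbs.Contains("anything"));  // allow_all_ and not denied
  assert(!dbs.Contains("secret"));
  dbs.Revoke("secret");              // forgets the deny entry (not a deny itself)
  assert(dbs.Contains("secret"));    // allow_all_ applies again

  memgraph::auth::Role analyst("analyst");
  analyst.db_access().Grant("reports");
  memgraph::auth::User alice("alice");  // ctor taking a username assumed
  alice.SetRole(analyst);
  assert(alice.HasAccess("reports"));   // granted via the role
  alice.db_access().Deny("reports");
  assert(!alice.HasAccess("reports"));  // user-level deny overrides role grant
}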


@ -1,4 +1,4 @@
// Copyright 2022 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Licensed as a Memgraph Enterprise file under the Memgraph Enterprise
// License (the "License"); by using this file, you agree to be bound by the terms of the License, and you may not use
@ -403,7 +403,7 @@ nlohmann::json Module::Call(const nlohmann::json &params, int timeout_millisec)
return ret;
}
bool Module::IsUsed() { return !module_executable_path_.empty(); }
bool Module::IsUsed() const { return !module_executable_path_.empty(); }
void Module::Shutdown() {
if (pid_ == -1) return;


@ -1,4 +1,4 @@
// Copyright 2022 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Licensed as a Memgraph Enterprise file under the Memgraph Enterprise
// License (the "License"); by using this file, you agree to be bound by the terms of the License, and you may not use
@ -49,7 +49,7 @@ class Module final {
/// specified executable path and can thus be used.
///
/// @return boolean indicating whether the module can be used
bool IsUsed();
bool IsUsed() const;
~Module();


@ -17,8 +17,15 @@
namespace memgraph::auth {
void LogWrongMain(const std::optional<utils::UUID> &current_main_uuid, const utils::UUID &main_req_id,
std::string_view rpc_req) {
spdlog::error(fmt::format("Received {} with main_id: {} != current_main_uuid: {}", rpc_req, std::string(main_req_id),
current_main_uuid.has_value() ? std::string(current_main_uuid.value()) : ""));
}
#ifdef MG_ENTERPRISE
void UpdateAuthDataHandler(memgraph::system::ReplicaHandlerAccessToState &system_state_access, auth::SynchedAuth &auth,
void UpdateAuthDataHandler(memgraph::system::ReplicaHandlerAccessToState &system_state_access,
const std::optional<utils::UUID> &current_main_uuid, auth::SynchedAuth &auth,
slk::Reader *req_reader, slk::Builder *res_builder) {
replication::UpdateAuthDataReq req;
memgraph::slk::Load(&req, req_reader);
@ -26,6 +33,12 @@ void UpdateAuthDataHandler(memgraph::system::ReplicaHandlerAccessToState &system
using memgraph::replication::UpdateAuthDataRes;
UpdateAuthDataRes res(false);
if (!current_main_uuid.has_value() || req.main_uuid != current_main_uuid) [[unlikely]] {
LogWrongMain(current_main_uuid, req.main_uuid, replication::UpdateAuthDataReq::kType.name);
memgraph::slk::Save(res, res_builder);
return;
}
// Note: No need to check epoch, recovery mechanism is done by a full uptodate snapshot
// of the set of databases. Hence no history exists to maintain regarding epoch change.
// If MAIN has changed we need to check this new group_timestamp is consistent with
@ -53,7 +66,8 @@ void UpdateAuthDataHandler(memgraph::system::ReplicaHandlerAccessToState &system
memgraph::slk::Save(res, res_builder);
}
void DropAuthDataHandler(memgraph::system::ReplicaHandlerAccessToState &system_state_access, auth::SynchedAuth &auth,
void DropAuthDataHandler(memgraph::system::ReplicaHandlerAccessToState &system_state_access,
const std::optional<utils::UUID> &current_main_uuid, auth::SynchedAuth &auth,
slk::Reader *req_reader, slk::Builder *res_builder) {
replication::DropAuthDataReq req;
memgraph::slk::Load(&req, req_reader);
@ -61,6 +75,12 @@ void DropAuthDataHandler(memgraph::system::ReplicaHandlerAccessToState &system_s
using memgraph::replication::DropAuthDataRes;
DropAuthDataRes res(false);
if (!current_main_uuid.has_value() || req.main_uuid != current_main_uuid) [[unlikely]] {
LogWrongMain(current_main_uuid, req.main_uuid, replication::DropAuthDataRes::kType.name);
memgraph::slk::Save(res, res_builder);
return;
}
// Note: No need to check epoch, recovery mechanism is done by a full uptodate snapshot
// of the set of databases. Hence no history exists to maintain regarding epoch change.
// If MAIN has changed we need to check this new group_timestamp is consistent with
@ -155,14 +175,14 @@ void Register(replication::RoleReplicaData const &data, system::ReplicaHandlerAc
auth::SynchedAuth &auth) {
// NOTE: Register even without license as the user could add a license at run-time
data.server->rpc_server_.Register<replication::UpdateAuthDataRpc>(
[system_state_access, &auth](auto *req_reader, auto *res_builder) mutable {
[&data, system_state_access, &auth](auto *req_reader, auto *res_builder) mutable {
spdlog::debug("Received UpdateAuthDataRpc");
UpdateAuthDataHandler(system_state_access, auth, req_reader, res_builder);
UpdateAuthDataHandler(system_state_access, data.uuid_, auth, req_reader, res_builder);
});
data.server->rpc_server_.Register<replication::DropAuthDataRpc>(
[system_state_access, &auth](auto *req_reader, auto *res_builder) mutable {
[&data, system_state_access, &auth](auto *req_reader, auto *res_builder) mutable {
spdlog::debug("Received DropAuthDataRpc");
DropAuthDataHandler(system_state_access, auth, req_reader, res_builder);
DropAuthDataHandler(system_state_access, data.uuid_, auth, req_reader, res_builder);
});
}
#endif
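Taken together, these hunks give every replica-side auth handler the same stale-MAIN guard. A minimal sketch of the shared pattern, condensed from the handlers above (the helper name CheckMainUuid is hypothetical; the same headers as the handlers are assumed):

    template <typename TReq, typename TRes>
    bool CheckMainUuid(const std::optional<memgraph::utils::UUID> &current_main_uuid, const TReq &req,
                       const TRes &res, memgraph::slk::Builder *res_builder) {
      // Process the request only if it carries the UUID of the MAIN we currently follow.
      if (!current_main_uuid.has_value() || req.main_uuid != *current_main_uuid) [[unlikely]] {
        memgraph::auth::LogWrongMain(current_main_uuid, req.main_uuid, TReq::kType.name);
        memgraph::slk::Save(res, res_builder);  // res was constructed with success == false
        return false;
      }
      return true;
    }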

View File

@@ -17,10 +17,16 @@
#include "system/state.hpp"
namespace memgraph::auth {
void LogWrongMain(const std::optional<utils::UUID> &current_main_uuid, const utils::UUID &main_req_id,
std::string_view rpc_req);
#ifdef MG_ENTERPRISE
void UpdateAuthDataHandler(system::ReplicaHandlerAccessToState &system_state_access, auth::SynchedAuth &auth,
void UpdateAuthDataHandler(system::ReplicaHandlerAccessToState &system_state_access,
const std::optional<utils::UUID> &current_main_uuid, auth::SynchedAuth &auth,
slk::Reader *req_reader, slk::Builder *res_builder);
void DropAuthDataHandler(system::ReplicaHandlerAccessToState &system_state_access, auth::SynchedAuth &auth,
void DropAuthDataHandler(system::ReplicaHandlerAccessToState &system_state_access,
const std::optional<utils::UUID> &current_main_uuid, auth::SynchedAuth &auth,
slk::Reader *req_reader, slk::Builder *res_builder);
bool SystemRecoveryHandler(auth::SynchedAuth &auth, auth::Auth::Config auth_config,

View File

@@ -18,11 +18,9 @@
#include "utils/enum.hpp"
namespace memgraph::slk {
// Serialize code for auth::Role
void Save(const auth::Role &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self.Serialize().dump(), builder);
}
void Save(const auth::Role &self, Builder *builder) { memgraph::slk::Save(self.Serialize().dump(), builder); }
namespace {
auth::Role LoadAuthRole(memgraph::slk::Reader *reader) {
std::string tmp;
@@ -89,6 +87,7 @@ void Load(auth::Auth::Config *self, memgraph::slk::Reader *reader) {
// Serialize code for UpdateAuthDataReq
void Save(const memgraph::replication::UpdateAuthDataReq &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self.main_uuid, builder);
memgraph::slk::Save(self.epoch_id, builder);
memgraph::slk::Save(self.expected_group_timestamp, builder);
memgraph::slk::Save(self.new_group_timestamp, builder);
@@ -96,6 +95,7 @@ void Save(const memgraph::replication::UpdateAuthDataReq &self, memgraph::slk::B
memgraph::slk::Save(self.role, builder);
}
void Load(memgraph::replication::UpdateAuthDataReq *self, memgraph::slk::Reader *reader) {
memgraph::slk::Load(&self->main_uuid, reader);
memgraph::slk::Load(&self->epoch_id, reader);
memgraph::slk::Load(&self->expected_group_timestamp, reader);
memgraph::slk::Load(&self->new_group_timestamp, reader);
@@ -113,6 +113,7 @@ void Load(memgraph::replication::UpdateAuthDataRes *self, memgraph::slk::Reader
// Serialize code for DropAuthDataReq
void Save(const memgraph::replication::DropAuthDataReq &self, memgraph::slk::Builder *builder) {
memgraph::slk::Save(self.main_uuid, builder);
memgraph::slk::Save(self.epoch_id, builder);
memgraph::slk::Save(self.expected_group_timestamp, builder);
memgraph::slk::Save(self.new_group_timestamp, builder);
@@ -120,6 +121,7 @@ void Save(const memgraph::replication::DropAuthDataReq &self, memgraph::slk::Bui
memgraph::slk::Save(self.name, builder);
}
void Load(memgraph::replication::DropAuthDataReq *self, memgraph::slk::Reader *reader) {
memgraph::slk::Load(&self->main_uuid, reader);
memgraph::slk::Load(&self->epoch_id, reader);
memgraph::slk::Load(&self->expected_group_timestamp, reader);
memgraph::slk::Load(&self->new_group_timestamp, reader);
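Both request types gain main_uuid in the same position on the Save and Load sides; SLK serialization appears to be purely positional (no field tags), as the paired functions above suggest, so Save and Load must touch fields in identical order. A minimal illustration of that invariant, using a hypothetical ExampleReq:

    struct ExampleReq {
      memgraph::utils::UUID main_uuid;
      std::string epoch_id;
    };
    void Save(const ExampleReq &self, memgraph::slk::Builder *builder) {
      memgraph::slk::Save(self.main_uuid, builder);  // written first ...
      memgraph::slk::Save(self.epoch_id, builder);
    }
    void Load(ExampleReq *self, memgraph::slk::Reader *reader) {
      memgraph::slk::Load(&self->main_uuid, reader);  // ... so it must be read first,
      memgraph::slk::Load(&self->epoch_id, reader);   // or every later field misaligns
    }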

View File

@@ -27,17 +27,22 @@ struct UpdateAuthDataReq {
static void Load(UpdateAuthDataReq *self, memgraph::slk::Reader *reader);
static void Save(const UpdateAuthDataReq &self, memgraph::slk::Builder *builder);
UpdateAuthDataReq() = default;
UpdateAuthDataReq(std::string epoch_id, uint64_t expected_ts, uint64_t new_ts, auth::User user)
: epoch_id{std::move(epoch_id)},
UpdateAuthDataReq(const utils::UUID &main_uuid, std::string epoch_id, uint64_t expected_ts, uint64_t new_ts,
auth::User user)
: main_uuid(main_uuid),
epoch_id{std::move(epoch_id)},
expected_group_timestamp{expected_ts},
new_group_timestamp{new_ts},
user{std::move(user)} {}
UpdateAuthDataReq(std::string epoch_id, uint64_t expected_ts, uint64_t new_ts, auth::Role role)
: epoch_id{std::move(epoch_id)},
UpdateAuthDataReq(const utils::UUID &main_uuid, std::string epoch_id, uint64_t expected_ts, uint64_t new_ts,
auth::Role role)
: main_uuid(main_uuid),
epoch_id{std::move(epoch_id)},
expected_group_timestamp{expected_ts},
new_group_timestamp{new_ts},
role{std::move(role)} {}
utils::UUID main_uuid;
std::string epoch_id;
uint64_t expected_group_timestamp;
uint64_t new_group_timestamp;
@@ -69,13 +74,16 @@ struct DropAuthDataReq {
enum class DataType { USER, ROLE };
DropAuthDataReq(std::string epoch_id, uint64_t expected_ts, uint64_t new_ts, DataType type, std::string_view name)
: epoch_id{std::move(epoch_id)},
DropAuthDataReq(const utils::UUID &main_uuid, std::string epoch_id, uint64_t expected_ts, uint64_t new_ts,
DataType type, std::string_view name)
: main_uuid(main_uuid),
epoch_id{std::move(epoch_id)},
expected_group_timestamp{expected_ts},
new_group_timestamp{new_ts},
type{type},
name{name} {}
utils::UUID main_uuid;
std::string epoch_id;
uint64_t expected_group_timestamp;
uint64_t new_group_timestamp;
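Call sites on MAIN now pass the instance's own UUID as the leading constructor argument; a usage sketch (all variable names hypothetical, assumed in scope on MAIN):

    auto req = memgraph::replication::UpdateAuthDataReq{main_uuid, epoch_id, expected_ts, new_ts, user};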

View File

@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -15,6 +15,9 @@
#include "communication/bolt/v1/value.hpp"
#include "utils/logging.hpp"
#include "communication/bolt/v1/fmt.hpp"
#include "io/network/fmt.hpp"
namespace {
constexpr uint8_t kBoltV43Version[4] = {0x00, 0x00, 0x03, 0x04};
constexpr uint8_t kEmptyBoltVersion[4] = {0x00, 0x00, 0x00, 0x00};

View File

@@ -0,0 +1,27 @@
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#pragma once
#if FMT_VERSION > 90000
#include <fmt/ostream.h>
#include "communication/bolt/v1/value.hpp"
template <>
class fmt::formatter<memgraph::communication::bolt::Value> : public fmt::ostream_formatter {};
template <>
class fmt::formatter<std::vector<memgraph::communication::bolt::Value>> : public fmt::ostream_formatter {};
template <>
class fmt::formatter<std::map<std::string, memgraph::communication::bolt::Value>> : public fmt::ostream_formatter {};
#endif
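fmt 9 stopped treating operator<<-only types as formattable by default, so these ostream_formatter specializations opt the Bolt value types back in. A usage sketch, assuming bolt::Value keeps its operator<< and an integer-converting constructor:

    memgraph::communication::bolt::Value v{int64_t{42}};
    spdlog::trace("Received value: {}", v);  // routed through ostream_formatter -> operator<<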

View File

@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -88,6 +88,12 @@ class Session {
virtual void Configure(const std::map<std::string, memgraph::communication::bolt::Value> &run_time_info) = 0;
#ifdef MG_ENTERPRISE
virtual auto Route(std::map<std::string, Value> const &routing,
std::vector<memgraph::communication::bolt::Value> const &bookmarks,
std::map<std::string, Value> const &extra) -> std::map<std::string, Value> = 0;
#endif
/**
* Put results of the processed query in the `encoder`.
*

View File

@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -79,9 +79,9 @@ State RunHandlerV4(Signature signature, TSession &session, State state, Marker m
}
case Signature::Route: {
if constexpr (bolt_minor >= 3) {
if (signature == Signature::Route) return HandleRoute<TSession>(session, marker);
return HandleRoute<TSession>(session, marker);
} else {
spdlog::trace("Supported only in bolt v4.3");
spdlog::trace("Supported only in bolt versions >= 4.3");
return State::Close;
}
}

View File

@@ -478,9 +478,6 @@ State HandleGoodbye() {
template <typename TSession>
State HandleRoute(TSession &session, const Marker marker) {
// Route message is not implemented since it is Neo4j specific, therefore we will receive it and inform user that
// there is no implementation. Before that, we have to read out the fields from the buffer to leave it in a clean
// state.
if (marker != Marker::TinyStruct3) {
spdlog::trace("Expected TinyStruct3 marker, but received 0x{:02x}!", utils::UnderlyingCast(marker));
return State::Close;
@@ -496,11 +493,27 @@ State HandleRoute(TSession &session, const Marker marker) {
spdlog::trace("Couldn't read bookmarks field!");
return State::Close;
}
// TODO: (andi) Fix Bolt versions
Value db;
if (!session.decoder_.ReadValue(&db)) {
spdlog::trace("Couldn't read db field!");
return State::Close;
}
#ifdef MG_ENTERPRISE
try {
auto res = session.Route(routing.ValueMap(), bookmarks.ValueList(), {});
if (!session.encoder_.MessageSuccess(std::move(res))) {
spdlog::trace("Couldn't send result of routing!");
return State::Close;
}
return State::Idle;
} catch (const std::exception &e) {
return HandleFailure(session, e);
}
#else
session.encoder_buffer_.Clear();
bool fail_sent =
session.encoder_.MessageFailure({{"code", "66"}, {"message", "Route message is not supported in Memgraph!"}});
@@ -509,6 +522,7 @@ State HandleRoute(TSession &session, const Marker marker) {
return State::Close;
}
return State::Error;
#endif
}
template <typename TSession>
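On the enterprise path, session.Route(...) is expected to produce a routing table that MessageSuccess can serialize back to the driver. A hedged sketch of that shape (field names follow the Bolt routing-table convention; bolt::Value's converting constructors are assumed):

    using Value = memgraph::communication::bolt::Value;
    std::map<std::string, Value> ExampleRoutingTable() {
      std::map<std::string, Value> server{
          {"role", Value{std::string{"WRITE"}}},
          {"addresses", Value{std::vector<Value>{Value{std::string{"localhost:7687"}}}}}};
      std::map<std::string, Value> rt{{"ttl", Value{int64_t{300}}},
                                      {"servers", Value{std::vector<Value>{Value{server}}}}};
      return {{"rt", Value{rt}}};  // shape a driver expects from a ROUTE response
    }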

src/communication/fmt.hpp (new file, +20 lines)
View File

@@ -0,0 +1,20 @@
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#pragma once
#if FMT_VERSION > 90000
#include <fmt/ostream.h>
#include <boost/asio/ip/tcp.hpp>
template <>
class fmt::formatter<boost::asio::ip::tcp::endpoint> : public fmt::ostream_formatter {};
#endif
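This mirrors the Bolt-value header above, but for Boost endpoints: with the specialization in place, the listener and server hunks below can hand the endpoint object straight to the logger instead of splitting it into address and port. A usage sketch:

    boost::asio::ip::tcp::endpoint endpoint{boost::asio::ip::make_address("127.0.0.1"), 7687};
    spdlog::info("HTTP server is listening on {}", endpoint);  // prints 127.0.0.1:7687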

View File

@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -21,6 +21,7 @@
#include <boost/beast/core.hpp>
#include "communication/context.hpp"
#include "communication/fmt.hpp"
#include "communication/http/session.hpp"
#include "utils/spin_lock.hpp"
#include "utils/synchronized.hpp"
@@ -82,7 +83,7 @@ class Listener final : public std::enable_shared_from_this<Listener<TRequestHand
return;
}
spdlog::info("HTTP server is listening on {}:{}", endpoint.address(), endpoint.port());
spdlog::info("HTTP server is listening on {}", endpoint);
}
void DoAccept() {

View File

@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -23,6 +23,7 @@
#include "communication/session.hpp"
#include "io/network/epoll.hpp"
#include "io/network/fmt.hpp"
#include "io/network/socket.hpp"
#include "utils/logging.hpp"
#include "utils/signals.hpp"

View File

@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source

View File

@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -22,6 +22,7 @@
#include "communication/init.hpp"
#include "communication/listener.hpp"
#include "io/network/fmt.hpp"
#include "io/network/socket.hpp"
#include "utils/logging.hpp"
#include "utils/message.hpp"

View File

@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -26,6 +26,7 @@
#include <boost/asio/ip/tcp.hpp>
#include "communication/context.hpp"
#include "communication/fmt.hpp"
#include "communication/init.hpp"
#include "communication/v2/listener.hpp"
#include "communication/v2/pool.hpp"
@@ -129,7 +130,7 @@ bool Server<TSession, TSessionContext>::Start() {
listener_->Start();
spdlog::info("{} server is fully armed and operational", service_name_);
spdlog::info("{} listening on {}", service_name_, endpoint_.address());
spdlog::info("{} listening on {}", service_name_, endpoint_);
context_thread_pool_.Run();
return true;

View File

@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -47,6 +47,7 @@
#include "communication/buffer.hpp"
#include "communication/context.hpp"
#include "communication/exceptions.hpp"
#include "communication/fmt.hpp"
#include "dbms/global.hpp"
#include "utils/event_counter.hpp"
#include "utils/logging.hpp"
@@ -212,14 +213,11 @@ class WebsocketSession : public std::enable_shared_from_this<WebsocketSession<TS
session_.Execute();
DoRead();
} catch (const SessionClosedException &e) {
spdlog::info("{} client {}:{} closed the connection.", service_name_, remote_endpoint_.address(),
remote_endpoint_.port());
spdlog::info("{} client {} closed the connection.", service_name_, remote_endpoint_);
DoClose();
} catch (const std::exception &e) {
spdlog::error(
"Exception was thrown while processing event in {} session "
"associated with {}:{}",
service_name_, remote_endpoint_.address(), remote_endpoint_.port());
spdlog::error("Exception was thrown while processing event in {} session associated with {}", service_name_,
remote_endpoint_);
spdlog::debug("Exception message: {}", e.what());
DoClose();
}
@@ -376,8 +374,7 @@ class Session final : public std::enable_shared_from_this<Session<TSession, TSes
socket.lowest_layer().non_blocking(false);
});
timeout_timer_.expires_at(boost::asio::steady_timer::time_point::max());
spdlog::info("Accepted a connection from {}: {}:{}", service_name_, remote_endpoint_.address(),
remote_endpoint_.port());
spdlog::info("Accepted a connection from {}: {}", service_name_, remote_endpoint_);
}
void DoRead() {
@@ -437,14 +434,11 @@ class Session final : public std::enable_shared_from_this<Session<TSession, TSes
session_.Execute();
DoRead();
} catch (const SessionClosedException &e) {
spdlog::info("{} client {}:{} closed the connection.", service_name_, remote_endpoint_.address(),
remote_endpoint_.port());
spdlog::info("{} client {} closed the connection.", service_name_, remote_endpoint_);
DoShutdown();
} catch (const std::exception &e) {
spdlog::error(
"Exception was thrown while processing event in {} session "
"associated with {}:{}",
service_name_, remote_endpoint_.address(), remote_endpoint_.port());
spdlog::error("Exception was thrown while processing event in {} session associated with {}", service_name_,
remote_endpoint_);
spdlog::debug("Exception message: {}", e.what());
DoShutdown();
}

View File

@@ -1,4 +1,4 @@
// Copyright 2022 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -12,19 +12,44 @@
#include "communication/websocket/auth.hpp"
#include <string>
#include "utils/variant_helpers.hpp"
namespace memgraph::communication::websocket {
bool SafeAuth::Authenticate(const std::string &username, const std::string &password) const {
return auth_->Lock()->Authenticate(username, password).has_value();
user_or_role_ = auth_->Lock()->Authenticate(username, password);
return user_or_role_.has_value();
}
bool SafeAuth::HasUserPermission(const std::string &username, const auth::Permission permission) const {
if (const auto user = auth_->ReadLock()->GetUser(username); user) {
return user->GetPermissions().Has(permission) == auth::PermissionLevel::GRANT;
bool SafeAuth::HasPermission(const auth::Permission permission) const {
auto locked_auth = auth_->ReadLock();
// Update if cache invalidated
if (!locked_auth->UpToDate(auth_epoch_) && user_or_role_) {
bool success = true;
std::visit(utils::Overloaded{[&](auth::User &user) {
auto tmp = locked_auth->GetUser(user.username());
if (!tmp) {
  success = false;
  return;  // don't dereference the empty optional
}
user = std::move(*tmp);
},
[&](auth::Role &role) {
auto tmp = locked_auth->GetRole(role.rolename());
if (!tmp) {
  success = false;
  return;  // don't dereference the empty optional
}
role = std::move(*tmp);
}},
*user_or_role_);
// Missing user/role; delete from cache
if (!success) user_or_role_.reset();
}
// Check permissions
if (user_or_role_) {
return std::visit(utils::Overloaded{[&](auto &user_or_role) {
return user_or_role.GetPermissions().Has(permission) == auth::PermissionLevel::GRANT;
}},
*user_or_role_);
}
// NOTE: WebSocket authenticates only if there is a user, so there is no need to check whether access is controlled
return false;
}
bool SafeAuth::HasAnyUsers() const { return auth_->ReadLock()->HasUsers(); }
bool SafeAuth::AccessControlled() const { return auth_->ReadLock()->AccessControlled(); }
} // namespace memgraph::communication::websocket
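The std::visit above dispatches on whether the cached principal is a User or a Role; if utils::Overloaded follows the common overload-set idiom, it is equivalent to this sketch:

    template <class... Ts>
    struct Overloaded : Ts... {
      using Ts::operator()...;
    };
    template <class... Ts>
    Overloaded(Ts...) -> Overloaded<Ts...>;  // deduction guide, implicit since C++20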

View File

@@ -21,9 +21,9 @@ class AuthenticationInterface {
public:
virtual bool Authenticate(const std::string &username, const std::string &password) const = 0;
virtual bool HasUserPermission(const std::string &username, auth::Permission permission) const = 0;
virtual bool HasPermission(auth::Permission permission) const = 0;
virtual bool HasAnyUsers() const = 0;
virtual bool AccessControlled() const = 0;
};
class SafeAuth : public AuthenticationInterface {
@@ -32,11 +32,13 @@ class SafeAuth : public AuthenticationInterface {
bool Authenticate(const std::string &username, const std::string &password) const override;
bool HasUserPermission(const std::string &username, auth::Permission permission) const override;
bool HasPermission(auth::Permission permission) const override;
bool HasAnyUsers() const override;
bool AccessControlled() const override;
private:
auth::SynchedAuth *auth_;
mutable std::optional<auth::UserOrRole> user_or_role_;
mutable auth::Auth::Epoch auth_epoch_{};
};
} // namespace memgraph::communication::websocket

View File

@@ -1,4 +1,4 @@
// Copyright 2022 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -10,6 +10,7 @@
// licenses/APL.txt.
#include "communication/websocket/listener.hpp"
#include "communication/fmt.hpp"
namespace memgraph::communication::websocket {
namespace {
@@ -61,7 +62,7 @@ Listener::Listener(boost::asio::io_context &ioc, ServerContext *context, tcp::en
return;
}
spdlog::info("WebSocket server is listening on {}:{}", endpoint.address(), endpoint.port());
spdlog::info("WebSocket server is listening on {}", endpoint);
}
void Listener::DoAccept() {

View File

@@ -1,4 +1,4 @@
// Copyright 2023 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@@ -80,7 +80,7 @@ bool Session::Run() {
return false;
}
authenticated_ = !auth_.HasAnyUsers();
authenticated_ = !auth_.AccessControlled();
connected_.store(true, std::memory_order_relaxed);
// run on the strand
@@ -162,7 +162,7 @@ utils::BasicResult<std::string> Session::Authorize(const nlohmann::json &creds)
return {"Authentication failed!"};
}
#ifdef MG_ENTERPRISE
if (!auth_.HasUserPermission(creds.at("username").get<std::string>(), auth::Permission::WEBSOCKET)) {
if (!auth_.HasPermission(auth::Permission::WEBSOCKET)) {
return {"Authorization failed!"};
}
#endif

View File

@@ -6,23 +6,37 @@ target_sources(mg-coordination
include/coordination/coordinator_state.hpp
include/coordination/coordinator_rpc.hpp
include/coordination/coordinator_server.hpp
include/coordination/coordinator_config.hpp
include/coordination/coordinator_communication_config.hpp
include/coordination/coordinator_exceptions.hpp
include/coordination/coordinator_instance.hpp
include/coordination/coordinator_slk.hpp
include/coordination/coordinator_data.hpp
include/coordination/constants.hpp
include/coordination/coordinator_cluster_config.hpp
include/coordination/coordinator_instance.hpp
include/coordination/coordinator_handlers.hpp
include/coordination/instance_status.hpp
include/coordination/replication_instance.hpp
include/coordination/raft_state.hpp
include/coordination/rpc_errors.hpp
include/nuraft/raft_log_action.hpp
include/nuraft/coordinator_cluster_state.hpp
include/nuraft/coordinator_log_store.hpp
include/nuraft/coordinator_state_machine.hpp
include/nuraft/coordinator_state_manager.hpp
PRIVATE
coordinator_communication_config.cpp
coordinator_client.cpp
coordinator_state.cpp
coordinator_rpc.cpp
coordinator_server.cpp
coordinator_data.cpp
coordinator_instance.cpp
coordinator_handlers.cpp
coordinator_instance.cpp
replication_instance.cpp
raft_state.cpp
coordinator_log_store.cpp
coordinator_state_machine.cpp
coordinator_state_manager.cpp
coordinator_cluster_state.cpp
)
target_include_directories(mg-coordination PUBLIC include)

View File

@@ -9,53 +9,74 @@
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#include "utils/uuid.hpp"
#ifdef MG_ENTERPRISE
#include "coordination/coordinator_client.hpp"
#include "coordination/coordinator_config.hpp"
#include "coordination/coordinator_communication_config.hpp"
#include "coordination/coordinator_rpc.hpp"
#include "replication_coordination_glue/common.hpp"
#include "replication_coordination_glue/messages.hpp"
#include "utils/result.hpp"
namespace memgraph::coordination {
namespace {
auto CreateClientContext(memgraph::coordination::CoordinatorClientConfig const &config)
auto CreateClientContext(memgraph::coordination::CoordinatorToReplicaConfig const &config)
-> communication::ClientContext {
return (config.ssl) ? communication::ClientContext{config.ssl->key_file, config.ssl->cert_file}
: communication::ClientContext{};
}
} // namespace
CoordinatorClient::CoordinatorClient(CoordinatorData *coord_data, CoordinatorClientConfig config,
HealthCheckCallback succ_cb, HealthCheckCallback fail_cb)
CoordinatorClient::CoordinatorClient(CoordinatorInstance *coord_instance, CoordinatorToReplicaConfig config,
HealthCheckClientCallback succ_cb, HealthCheckClientCallback fail_cb)
: rpc_context_{CreateClientContext(config)},
rpc_client_{io::network::Endpoint(io::network::Endpoint::needs_resolving, config.ip_address, config.port),
&rpc_context_},
rpc_client_{config.mgt_server, &rpc_context_},
config_{std::move(config)},
coord_data_{coord_data},
coord_instance_{coord_instance},
succ_cb_{std::move(succ_cb)},
fail_cb_{std::move(fail_cb)} {}
auto CoordinatorClient::InstanceName() const -> std::string { return config_.instance_name; }
auto CoordinatorClient::SocketAddress() const -> std::string { return rpc_client_.Endpoint().SocketAddress(); }
auto CoordinatorClient::CoordinatorSocketAddress() const -> std::string { return config_.CoordinatorSocketAddress(); }
auto CoordinatorClient::ReplicationSocketAddress() const -> std::string { return config_.ReplicationSocketAddress(); }
auto CoordinatorClient::InstanceDownTimeoutSec() const -> std::chrono::seconds {
return config_.instance_down_timeout_sec;
}
auto CoordinatorClient::InstanceGetUUIDFrequencySec() const -> std::chrono::seconds {
return config_.instance_get_uuid_frequency_sec;
}
void CoordinatorClient::StartFrequentCheck() {
MG_ASSERT(config_.health_check_frequency_sec > std::chrono::seconds(0),
if (instance_checker_.IsRunning()) {
return;
}
MG_ASSERT(config_.instance_health_check_frequency_sec > std::chrono::seconds(0),
"Health check frequency must be greater than 0");
instance_checker_.Run(
config_.instance_name, config_.health_check_frequency_sec, [this, instance_name = config_.instance_name] {
config_.instance_name, config_.instance_health_check_frequency_sec,
[this, instance_name = config_.instance_name] {
try {
spdlog::trace("Sending frequent heartbeat to machine {} on {}", instance_name,
rpc_client_.Endpoint().SocketAddress());
config_.CoordinatorSocketAddress());
{ // NOTE: This is intentionally scoped so that stream lock could get released.
auto stream{rpc_client_.Stream<memgraph::replication_coordination_glue::FrequentHeartbeatRpc>()};
stream.AwaitResponse();
}
succ_cb_(coord_data_, instance_name);
// Subtle race condition: the lock must be acquired before the callback is invoked, since an
// instance's callback can be changed after the lock is already acquired
// (failover case, when the instance is promoted to MAIN).
succ_cb_(coord_instance_, instance_name);
} catch (rpc::RpcFailedException const &) {
fail_cb_(coord_data_, instance_name);
fail_cb_(coord_instance_, instance_name);
}
});
}
@@ -64,23 +85,21 @@ void CoordinatorClient::StopFrequentCheck() { instance_checker_.Stop(); }
void CoordinatorClient::PauseFrequentCheck() { instance_checker_.Pause(); }
void CoordinatorClient::ResumeFrequentCheck() { instance_checker_.Resume(); }
auto CoordinatorClient::SetCallbacks(HealthCheckCallback succ_cb, HealthCheckCallback fail_cb) -> void {
succ_cb_ = std::move(succ_cb);
fail_cb_ = std::move(fail_cb);
auto CoordinatorClient::ReplicationClientInfo() const -> coordination::ReplicationClientInfo {
return config_.replication_client_info;
}
auto CoordinatorClient::ReplicationClientInfo() const -> ReplClientInfo { return config_.replication_client_info; }
auto CoordinatorClient::SendPromoteReplicaToMainRpc(ReplicationClientsInfo replication_clients_info) const -> bool {
auto CoordinatorClient::SendPromoteReplicaToMainRpc(const utils::UUID &uuid,
ReplicationClientsInfo replication_clients_info) const -> bool {
try {
auto stream{rpc_client_.Stream<PromoteReplicaToMainRpc>(std::move(replication_clients_info))};
auto stream{rpc_client_.Stream<PromoteReplicaToMainRpc>(uuid, std::move(replication_clients_info))};
if (!stream.AwaitResponse().success) {
spdlog::error("Failed to receive successful RPC failover response!");
spdlog::error("Failed to receive successful PromoteReplicaToMainRpc response!");
return false;
}
return true;
} catch (rpc::RpcFailedException const &) {
spdlog::error("RPC error occurred while sending failover RPC!");
spdlog::error("RPC error occurred while sending PromoteReplicaToMainRpc!");
}
return false;
}
@@ -101,5 +120,71 @@ auto CoordinatorClient::DemoteToReplica() const -> bool {
return false;
}
auto CoordinatorClient::SendSwapMainUUIDRpc(utils::UUID const &uuid) const -> bool {
try {
auto stream{rpc_client_.Stream<replication_coordination_glue::SwapMainUUIDRpc>(uuid)};
if (!stream.AwaitResponse().success) {
spdlog::error("Failed to receive successful RPC swapping of uuid response!");
return false;
}
return true;
} catch (const rpc::RpcFailedException &) {
spdlog::error("RPC error occurred while sending swapping uuid RPC!");
}
return false;
}
auto CoordinatorClient::SendUnregisterReplicaRpc(std::string_view instance_name) const -> bool {
try {
auto stream{rpc_client_.Stream<UnregisterReplicaRpc>(instance_name)};
if (!stream.AwaitResponse().success) {
spdlog::error("Failed to receive successful RPC response for unregistering replica!");
return false;
}
return true;
} catch (rpc::RpcFailedException const &) {
spdlog::error("Failed to unregister replica!");
}
return false;
}
auto CoordinatorClient::SendGetInstanceUUIDRpc() const
-> utils::BasicResult<GetInstanceUUIDError, std::optional<utils::UUID>> {
try {
auto stream{rpc_client_.Stream<GetInstanceUUIDRpc>()};
auto res = stream.AwaitResponse();
return res.uuid;
} catch (const rpc::RpcFailedException &) {
spdlog::error("RPC error occured while sending GetInstance UUID RPC");
return GetInstanceUUIDError::RPC_EXCEPTION;
}
}
auto CoordinatorClient::SendEnableWritingOnMainRpc() const -> bool {
try {
auto stream{rpc_client_.Stream<EnableWritingOnMainRpc>()};
if (!stream.AwaitResponse().success) {
spdlog::error("Failed to receive successful RPC response for enabling writing on main!");
return false;
}
return true;
} catch (rpc::RpcFailedException const &) {
spdlog::error("Failed to enable writing on main!");
}
return false;
}
auto CoordinatorClient::SendGetInstanceTimestampsRpc() const
-> utils::BasicResult<GetInstanceUUIDError, replication_coordination_glue::DatabaseHistories> {
try {
auto stream{rpc_client_.Stream<coordination::GetDatabaseHistoriesRpc>()};
return stream.AwaitResponse().database_histories;
} catch (const rpc::RpcFailedException &) {
spdlog::error("RPC error occured while sending GetInstance UUID RPC");
return GetInstanceUUIDError::RPC_EXCEPTION;
}
}
} // namespace memgraph::coordination
#endif
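All of the new senders share one shape: open a typed RPC stream, await the response, and map failure onto a boolean or an error enum. A condensed sketch of that recurring pattern (the helper name SendBoolRpc is hypothetical; rpc::Client's Stream/AwaitResponse interface is assumed as used above):

    template <typename TRpc, typename... Args>
    bool SendBoolRpc(memgraph::rpc::Client &client, std::string_view rpc_name, Args &&...args) {
      try {
        auto stream{client.Stream<TRpc>(std::forward<Args>(args)...)};
        if (!stream.AwaitResponse().success) {
          spdlog::error("Failed to receive successful {} response!", rpc_name);
          return false;
        }
        return true;
      } catch (memgraph::rpc::RpcFailedException const &) {
        spdlog::error("RPC error occurred while sending {}!", rpc_name);
      }
      return false;
    }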

Some files were not shown because too many files have changed in this diff.