Compare commits

...

534 Commits

Author SHA1 Message Date
Andi
3faa64a53c
Forbid TSAN with Jemalloc (#1842) 2024-03-25 07:20:55 +00:00
Andi
581767b491
Bump googletest to 1.14 (#1845) 2024-03-23 14:45:12 +00:00
Andi
b228b431a8
Fix installation of mgcxx (#1849) 2024-03-22 17:31:01 +01:00
Antonio Filipovic
13e3a1d0f7
Add distributed locks in HA (#1819)
- Add distributed locks
- Fix the wrong MAIN state on the follower coordinator
- Fix wrong main doing failover
2024-03-22 11:34:33 +00:00
Marko Barišić
89e13109d7
Fix jepsen nodes not starting up healthy (#1846)
* add a loop to check if all nodes started correctly and restart if any failed
2024-03-21 18:39:40 +01:00
DavIvek
56be736d30
Fix and update mgbench (#1838) 2024-03-21 12:34:59 +00:00
DavIvek
a3d2474c5b
Fix timestamps saving on-disk (#1811) 2024-03-21 10:50:55 +00:00
Andi
0913e95167
Rename HA startup flags (#1820) 2024-03-21 09:12:28 +00:00
Andi
f699c0b37f
Support bolt+routing (#1796) 2024-03-21 06:41:26 +00:00
Ante Pušić
9629f10166
Text search (#1603, #1739)
Add text search:
* named property search
* all-property search
* regex search
* aggregation over search results

Text search works with:
* non-parallel transactions
* durability (WAL files and snapshots)
* multitenancy
2024-03-20 10:29:24 +01:00
Marko Barišić
2ac649f3b5
Upgrade jepsen (#1594)
* Try with jepsen v0.3.5
* Add a few WIP adjustments
* Add replication restore state on startup flag
* Fix some run.sh scripts issues
* Improve cluster commands
* Run Jepsen on debian-12 with toolchain v5
---------
Co-authored-by: Marko Budiselic <mbudiselicbuda@gmail.com>
2024-03-18 16:38:58 +01:00
Marko Barišić
ec8536e11b
Make diff run on push to master again (#1826)
* Add workflow dispatch and run on push to master
2024-03-18 11:58:34 +01:00
Marko Barišić
84fe853169
Fix cargo not found when building in mgbuild container (#1825)
* Add source /home/mg/.cargo/env before cmake and make commands in mgbuild.sh
2024-03-18 10:47:59 +01:00
Josipmrden
082f9a7d9b
Add behaviour of no updates if vertex is updated with same value (#1791) 2024-03-15 14:45:21 +01:00
Aidar Samerkhanov
0ed2d18754
Add RollUpApply operator support to edge type index rewrite. (#1816) 2024-03-15 11:39:37 +04:00
Gareth Andrew Lloyd
8bc8e867e4
Pmr allocator unify (#1801)
Query allocator and evaluation allocator were different.
After analysis, it was determined they should be the same; this will help
future development reduce TypeValue copies during queries.

Changes:
- Common allocator, PoolResource backed by MonotonicResource
- Optimized Pool, now O(1) alloc/dealloc as all chunks in Pool form a single 
  free list
- 2nd PoolResource, using bin sizing, not as perfect for memory usage but 
  O(1) bin selection
- Now have jemalloc's background thread to make sure decay and return 
  to OS happens
- Optimized PropertyValue to be faster at destruction/copy/move
- Less temporary memory allocations
  - CSV reader now maintains a common line buffer it reuses on line reads
  - Writing out bolt values, now reuses a values buffer
  - Evaluating an int no longer makes temporary error-message strings for
    errors it most likely never throws
  - ExpandVariable will reuse an existing edge list in the frame if one existed
2024-03-14 11:21:59 -07:00
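For context on the O(1) pool mentioned in the commit above, here is a minimal sketch (illustrative names, not Memgraph's actual PoolResource) of a fixed-size pool where every chunk's blocks are threaded onto a single intrusive free list, so allocate and deallocate are both constant-time pointer swaps:

    #include <cstddef>
    #include <cstdlib>
    #include <new>
    #include <vector>

    class FixedPool {
     public:
      FixedPool(std::size_t block_size, std::size_t blocks_per_chunk)
          : block_size_(block_size < sizeof(Node) ? sizeof(Node) : block_size),
            blocks_per_chunk_(blocks_per_chunk) {}

      ~FixedPool() {
        for (void *chunk : chunks_) std::free(chunk);
      }

      void *Allocate() {
        if (!free_list_) Refill();      // grow by one chunk when exhausted
        Node *node = free_list_;        // pop the head: O(1)
        free_list_ = node->next;
        return node;
      }

      void Deallocate(void *p) {
        Node *node = static_cast<Node *>(p);  // push onto the head: O(1)
        node->next = free_list_;
        free_list_ = node;
      }

     private:
      struct Node { Node *next; };

      void Refill() {
        void *chunk = std::malloc(block_size_ * blocks_per_chunk_);
        if (!chunk) throw std::bad_alloc{};
        chunks_.push_back(chunk);
        // Thread every block of the new chunk onto the shared free list.
        char *base = static_cast<char *>(chunk);
        for (std::size_t i = 0; i < blocks_per_chunk_; ++i)
          Deallocate(base + i * block_size_);
      }

      std::size_t block_size_;
      std::size_t blocks_per_chunk_;
      Node *free_list_ = nullptr;
      std::vector<void *> chunks_;
    };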
Marko Barišić
b0cdcd3483
Run CI in mgbuilder containers (#1749)
* Update deployment files for mgbuilders because of toolchain upgrade
* Fix args parameter in builder yaml files
* Add fedora 38, 39 and rockylinux 9.3 mgbuilder Dockerfiles
* Change format of ARG TOOLCHAIN_VERSION from toolchain-vX to vX
* Add function to check supported arch, build type, os and toolchain
* Add options to init subcommand
* Add image names to mgbuilders
* Add v2 of the run.sh script
* Add testing to run2.sh
* Add option for threads --thread
* Add options for enterprise license and organization name
* Make stop mgbuild container step run always
* Add --ci flag to init script
* Move init conditionals under build-memgraph flags
* Add --community flag to build-memgraph
* Change target dir inside mgbuild container
* Add node fix to debian 11, ubuntu 20.04 and ubuntu 22.04
* rm memgraph repo after installing deps
* Add mg user in Dockerfile
* Add step to install rust on all OSs
* Chown files copied into mgbuild container
* Add e2e tests
* Add jepsen test
* Bugfix: Using reference in a callback
* Bugfix: Broad target for e2e tests
* Up db info test limit
* Disable e2e streams tests
* Fix default THREADS
* Prioritize docker compose over docker-compose
* Improve selection between docker compose and docker-compose
* Install PyYAML as mg user
* Fix doxygen install for rocky linux 9.3
* Fix rocky-9.3 environment script to properly install sbcl
* Rename all rocky-9 mentions to rocky-9.3
* Add mgdeps-cache and benchgraph-api hostnames to mgbuild images
* Add logic to pull mgbuild image if missing
* Fix build errors on toolchain-v5 (#1806)
* Rename run2 script, remove run script, add small features to mgbuild.sh
* Add --no-copy flag to build-memgraph to resolve TODO
* Add timeouts to diff jobs
* Fix asio flaky clone, try mgdeps-cache first

---------

Co-authored-by: Andreja Tonev <andreja.tonev@memgraph.io>
Co-authored-by: Ante Pušić <ante.f.pusic@gmail.com>
Co-authored-by: antoniofilipovic <filipovicantonio1998@gmail.com>
2024-03-14 12:19:59 +01:00
Andi
24f8a14b43
Improve registration queries in HA environment (#1809) 2024-03-13 13:04:27 +00:00
Josipmrden
2cab07429e
Add new PR template (#1798) 2024-03-13 10:09:22 +01:00
DavIvek
de2e2048ef
Support label creation via property values (#1762) 2024-03-12 12:55:40 +00:00
Gareth Andrew Lloyd
a282542666
Optimise ORDER BY, RANGE, UNWIND (#1781)
* Optimise frame change

* Optimise distinct + orderby memory usage

- dispose collections as early as possible
- move values rather than copy

* Better perf, ORDER BY

* Optimise RANGE and UNWIND

* ConstraintVerificationInfo only if at least one constraint

* Optimise TypeValue

* Clang-tidy fix
2024-03-12 00:26:11 +00:00
Josipmrden
462336ff78
Fix early exit for OR expression (#1738) 2024-03-11 22:44:15 +01:00
Aidar Samerkhanov
1c71d605ff
Fix PatternVisitor compilation in toolchain-v5 (#1803) 2024-03-08 19:20:40 -08:00
Antonio Filipovic
2a5388cea9
Add tests to verify log store works properly (#1794) 2024-03-08 15:16:30 +00:00
gvolfing
619b01f3f8
Implement edge type indices (#1542)
2024-03-08 08:44:48 +01:00
Andi
5ca98f9543
Fix snapshot creation in RSM and forbid multiple leaders (#1788) 2024-03-07 17:40:32 +00:00
Aidar Samerkhanov
a099417c56
List Pattern Comprehension planner (#1686) 2024-03-07 18:41:02 +04:00
Antonio Filipovic
02325f8673
Fix bug prone add server to cluster behavior (#1792) 2024-03-07 11:10:33 +00:00
Katarina Supe
6f849a14df
Update cypherl transform script (#1701)
* Update cypherl transform script

* Add new script and fix typo

* Add convert to separate files script

---------

Co-authored-by: Marko Budiselić <marko.budiselic@memgraph.com>
2024-03-07 10:04:36 +01:00
Andi
75aad72984
Improve in-memory RAFT state (#1782) 2024-03-06 09:16:46 +01:00
Antonio Filipovic
d4d4660af0
Add force sync REPLICA with MAIN (#1777) 2024-03-05 16:51:14 +00:00
Andi
1802dc93d1
Improve Raft log serialization (#1778) 2024-03-05 07:33:13 +00:00
Andi
822183b62d
Support failure of coordinators (#1728) 2024-03-04 07:24:18 +00:00
Antonio Filipovic
33caa27161
Ensure replication works on HA cluster in different scenarios (#1743) 2024-03-01 12:32:56 +01:00
Marko Barišić
f316f7db87
Add openssl to MEMGRAPH_BUILD_DEPS for amzn-2 and centos-7 (#1771) 2024-02-28 18:21:56 +01:00
Gareth Andrew Lloyd
55f224839e
Do not use UUID_STR_LEN (#1770)
Older libuuid did not have this macro, we need to publish for older
distro with older libs.
2024-02-28 17:46:03 +01:00
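A sketch of the portable workaround implied by this commit, assuming the standard libuuid API: avoid the UUID_STR_LEN macro (absent from older libuuid headers) and use a local constant, since a canonical UUID string is 36 characters plus the terminating NUL.

    #include <uuid/uuid.h>

    #include <cstddef>
    #include <string>

    // Local stand-in for UUID_STR_LEN: 36 chars + '\0'.
    constexpr std::size_t kUuidStringLength = 37;

    std::string UuidToString() {
      uuid_t id;
      uuid_generate(id);            // random/time-based UUID from libuuid
      char buf[kUuidStringLength];
      uuid_unparse(id, buf);        // writes the canonical 36-char form + NUL
      return std::string{buf};
    }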
Antonio Filipovic
b561c61b64
HA: Add initial logic for choosing new replica (#1729) 2024-02-28 09:57:00 +00:00
DavIvek
b7de79d5a0
Fix schema.node_type_properties() and schema.rel_type_properties() (#1718) 2024-02-27 21:40:55 +00:00
Gareth Andrew Lloyd
da898be8f9
Compact Delta 80B -> 56B (#1747)
Make a special structure for old_disk_key. std::optional<std::string> was
40B, which was the largest member of our action union. Replaced with an 8B
structure.

This makes the largest member now vertex_edge at 24B, which means Delta is
now only 56B.

🥳🎉 Now less than a cacheline 🎊
2024-02-27 17:21:52 +00:00
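To illustrate the numbers above (sizes assume a typical 64-bit libstdc++ build; the member name below is hypothetical, not the actual Delta field): std::optional<std::string> is a 32B string plus a discriminator padded to 40B, while a nullable pointer-based stand-in is a single 8B word.

    #include <optional>
    #include <string>

    // Old member: 32B std::string + bool, padded to 40B on 64-bit libstdc++.
    static_assert(sizeof(std::optional<std::string>) == 40);

    // 8B replacement sketch: a null pointer encodes "no old disk key".
    struct OldDiskKey {
      const std::string *key = nullptr;  // pointee owned/allocated elsewhere
    };
    static_assert(sizeof(OldDiskKey) == 8);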
Gareth Andrew Lloyd
a6fcdfd905
Make GC + snapshot, main lock friendly (#1759)
- Only IN_MEMORY_ANALYTICAL requires a unique lock during snapshot
- GC in some cases will be provided with a unique lock
  - This fact can be used for optimisations
  - In all other cases, optimisations should be done with an alternative
    check, not by getting a unique lock

Also:
- Faster property lookup
- Faster index iteration (better conditional branching)
2024-02-27 15:45:08 +01:00
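A minimal sketch of the locking policy described above, assuming std::shared_mutex semantics (illustrative names, not the actual storage code): only IN_MEMORY_ANALYTICAL takes the main lock uniquely for a snapshot; every other mode takes it shared so transactions keep running.

    #include <shared_mutex>

    enum class StorageMode { IN_MEMORY_TRANSACTIONAL, IN_MEMORY_ANALYTICAL };

    void CreateSnapshot(std::shared_mutex &main_lock, StorageMode mode) {
      if (mode == StorageMode::IN_MEMORY_ANALYTICAL) {
        std::unique_lock guard(main_lock);  // exclusive: no concurrent mutation
        // ... write the snapshot while everything else is blocked ...
      } else {
        std::shared_lock guard(main_lock);  // transactions keep running
        // ... write the snapshot from an MVCC-consistent view ...
      }
    }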
Marko Barišić
e88c7a0aa5
Add jobs for pushing ARM packages (#1765)
* Add jobs for pushing ARM packages
2024-02-27 12:08:53 +01:00
Marko Barišić
86ff96697d
Minor update to the rc workflow (#1760)
* Increase ARM build timeout to 120 minutes

* Remove PushToS3 job and make each Package job push to S3 individually

* Expand ARM timeout to 150 minutes for added safety; revert this after release
2024-02-26 22:57:21 +01:00
andrejtonev
f4d9a3695d
Introduce multi-tenancy to SHOW REPLICAS (#1735)
---------

Co-authored-by: Gareth Lloyd <gareth.lloyd@memgraph.io>
2024-02-26 19:05:49 +00:00
andrejtonev
c2e9df309a
Correctly call driver v1 tests (#1630) 2024-02-26 17:28:13 +00:00
andrejtonev
82c47ee80d
GetInfo simplification (#1621)
* Removed force dir in the GetInfo functions
2024-02-26 14:55:45 +00:00
andrejtonev
6a4ef55e90
Better auth user/role handling (#1699)
* Stop auth module from creating users
* Explicit about auth policy (check if no users defined OR auth module used)
* Role supports database access definition
* Authenticate() returns user or role
* AuthChecker generates QueryUserOrRole (can be empty)
* QueryUserOrRole actually authorizes
* Add auth cache invalidation
* Better database access queries (GRANT, DENY, REVOKE DATABASE)
2024-02-22 14:00:39 +00:00
Marko Budiselić
98727e0fa0
Update operating systems (#1371) 2024-02-22 11:14:48 +01:00
Aidar Samerkhanov
9a20ac494d
In BFS expansion filter by path we should shrink path to restore state prior to expansion only if the path was changed. (#1745) 2024-02-22 05:34:08 +00:00
Marko Barišić
e302be98a2
Push successful RC builds to S3 (#1741)
* Add new workflow which calls release build workflows

* Make the workflow build packages only on RC tags

* Change artifact names to include OS name
2024-02-21 17:08:14 +01:00
Marko Budiselić
61b9bb0f59
Add toolchain-v5 compatibility Revert to C++20 (#587)
* Upgrade cppitertools, spdlog, fmt, rapidcheck
* Make compilation work on both v4 and v5 toolchains
2024-02-19 21:09:54 +01:00
Andi
7ec648b4ce
Add --experimental-enabled=high-availability (#1720) 2024-02-19 16:28:15 +00:00
Marko Budiselić
f098a9d5e3
Patch NuRaft for clang-17 compilation (#1733) 2024-02-19 14:50:37 +01:00
Josipmrden
bae3e8a6d3
Add function for property sizes (#1557)
2024-02-19 13:56:01 +01:00
Andi
f3574012c5
Add cpp23 support (#1726) 2024-02-19 10:36:51 +00:00
Gareth Andrew Lloyd
33c400fcc1
Fixup memory e2e tests (#1715)
- Remove the e2e that did concurrent mgp_* calls on the same transaction
  (ATM this is unsupported)
- Fix up the concurrent mgp_global_alloc test to be testing it more precisely
- Reduce the memory limit on detach delete test due to recent memory
  optimizations around deltas.
- No longer throw from the hook, through jemalloc C, to our C++ on the other
  side. This caused mutex unlocks to not happen.
- No longer allocate error messages while inside the hook. This caused
  recursive entry back inside jemalloc, which would try to relock a
  non-recursive mutex.
2024-02-16 15:35:08 +00:00
Marko Budiselić
5ac938a6c9
Remove default assignees from issue-bug template (#1730) 2024-02-16 14:41:53 +01:00
Andi
3e3224f0a2
Forbid having multiple mains in the cluster (#1727) 2024-02-16 11:41:15 +00:00
Antonio Filipovic
bfc756c092
HA: Polish flow for replicas from coordinator (#1711) 2024-02-16 10:58:01 +01:00
Marko Barišić
5f2e3f01d0
Turn e2e tests back on for release build workflows (#1725) 2024-02-15 16:20:04 +01:00
Marko Barišić
2c774ff09b
Add rules for rc workflows (#1722) 2024-02-15 15:33:14 +01:00
Andi
20b47845f0
Forbid writing to cluster-managed main on restart (#1717) 2024-02-15 14:07:04 +01:00
Andi
fb281459b9
Add support for unregistering replication instances (#1712) 2024-02-14 14:24:59 +00:00
Andi
3a7e62f72c
Forbid branching when registering replica in auto-managed cluster (#1709) 2024-02-14 08:02:51 +00:00
Gareth Andrew Lloyd
f48151576b
System replication experimental flag (#1702)
- Remove the compile time control
- Introduce the runtime control flag

New flag `--experimental-enabled=system-replication`
2024-02-13 12:57:18 +00:00
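A sketch of how a runtime control flag like `--experimental-enabled=system-replication` could map a comma-separated value onto a bitset; the flag values come from the commits in this log, but the parsing and enum below are assumptions.

    #include <cstdint>
    #include <sstream>
    #include <string>

    enum class Experiments : uint64_t {
      NONE = 0,
      SYSTEM_REPLICATION = 1U << 0,
      HIGH_AVAILABILITY = 1U << 1,
    };

    uint64_t ParseExperimental(const std::string &flag_value) {
      uint64_t enabled = 0;
      std::istringstream ss(flag_value);
      std::string token;
      while (std::getline(ss, token, ',')) {  // e.g. "system-replication"
        if (token == "system-replication")
          enabled |= static_cast<uint64_t>(Experiments::SYSTEM_REPLICATION);
        else if (token == "high-availability")
          enabled |= static_cast<uint64_t>(Experiments::HIGH_AVAILABILITY);
      }
      return enabled;
    }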
Andi
4a7c7f0898
Distributed coordinators (#1693) 2024-02-13 08:49:28 +00:00
Ivan Milinović
7688a1b068
Fix unbound variable causing crash inside subquery (#1710) 2024-02-13 01:10:03 +01:00
Antonio Filipovic
4f4a569c72
Revert replication tests (#1707) 2024-02-12 16:42:57 +01:00
Ivan Milinović
a511e63c7a
Fix memory tracker counting wrong after OOM (#1651) 2024-02-11 20:29:06 +01:00
DavIvek
0133673f1d
Add support for query params in load csv (#1653) 2024-02-09 18:26:27 +01:00
DavIvek
786cdea260
Fix go driver test (#1708) 2024-02-09 17:07:30 +01:00
Antonio Filipovic
54f78f9217
Revert e2e tests and remove flaky ones (#1703) 2024-02-09 12:55:31 +01:00
Marko Barišić
dcdbd0a19a
Fix primary urls (#1700) 2024-02-08 14:19:30 +01:00
Andi
cf80687d1d
HA: Organize Raft coordinator group (#1687) 2024-02-08 09:11:33 +00:00
Aidar Samerkhanov
2fa8e00124
Fix accumulated path evaluation in builtin algorithms. (#1642)
Fix accumulated path evaluation in DFS, BFS, WeightedShortestPath and AllShortestPath algorithms.
2024-02-08 10:48:54 +04:00
Antonio Filipovic
c15b62a88d
HA: Disable replication from old main (#1674) 2024-02-07 11:20:47 +01:00
Gareth Andrew Lloyd
4ef6a1f9c3
Improve memory handling of Deltas (#1688)
- Reduce delta from 104B to 80B
- Hold and pass them around as in a deque
- Detect and delete deltas within commit if safe to do so
2024-02-06 18:07:38 +01:00
andrejtonev
7ead00f23e
Adding authentication data replication (#1666)
* Add AUTH system tx deltas
* Add auth data RPC and handlers
* Support multiple system deltas in a single transaction
* Added e2e test
* Bugfix: KVStore segfault after move

---------

Co-authored-by: Gareth Lloyd <gareth.lloyd@memgraph.io>
2024-02-05 10:37:00 +00:00
Marko Budiselić
c46dad18fe
Add RocksDB ADR (#1659) 2024-02-03 19:21:13 +01:00
Andi
cb7b88ad92
HA: Support restart of instances (#1672) 2024-02-01 11:55:48 +01:00
Marko Barišić
b443934b68
Update release CI job timeouts (#1683)
* Set timeout for jobs in release CI to 60 minutes

* Set timeout for stress test large to 12 hours
2024-01-31 19:19:05 +01:00
Andi
6ab4235cc9
Add NuRaft library (#1678) 2024-01-31 13:13:59 +01:00
Marko Barišić
a9ef28c68e
Upgrade deprecated github actions (#1673)
* upgrade actions/checkout from v3 to v4

* upgrade actions/upload-artifact from v3 to v4

* upgrade actions/download-artifact from v2 and v3 to v4

* Fix duplicate artifact names in diff.yaml

* Fix duplicate artifact names in release_debian10.yaml and release_ubuntu2004.yaml
2024-01-30 22:52:56 +01:00
Marko Barišić
79361e9205
Disable e2e tests in all workflows (#1677)
* Disable e2e tests in diff.yaml

* Disable e2e tests in release builds
2024-01-30 19:37:00 +01:00
Gareth Andrew Lloyd
97b1e67d80
Fix auth durability (#1644)
* Change auth durability to store hash algorithm
* Add Salt to SHA256
2024-01-30 18:17:05 +00:00
Andi
78a88737f8
HA: Add automatic failover (#1646)
Co-authored-by: antoniofilipovic <filipovicantonio1998@gmail.com>
2024-01-29 15:34:00 +01:00
andrejtonev
ff44d68843
Simplify auth::Auth (#1663)
Moved various auth flags under a single config
Moved all regex logic under auth::Auth
2024-01-29 12:52:32 +00:00
Marko Barišić
f1484240a0
Make release CI predictable (#1658)
* Move stress test large to a separate workflow
* Combine Debian10 and Ubuntu20.04 stress tests into one workflow
* Remove BigMemory tag from release_debian10
* Move e2e tests to a separate job
* Move debug integration tests to a separate job
* Move release durability and stress tests to a separate job
* Move release benchmarks to a separate job
* Add 90 min timeout restriction to all jobs
* Move env variables to workflow level
* Move BUILD_TYPE env var to workflow level
---------

Co-authored-by: Aidar Samerkhanov <aidar.samerkhanov@memgraph.io>
2024-01-26 17:01:25 +01:00
Marko Barišić
1c95c3dc59
Add step to refresh jepsen cluster before test (#1667)
refresh jepsen during diff workflow
2024-01-26 10:50:03 +00:00
Gareth Andrew Lloyd
9f7118d893
Performance tuning based on stress test (#1572)
Minor changes that speedup the large stress test.
Also now uses a stop token for a more productive shutdown. No need to wait for expensive GC runs.
2024-01-25 17:14:58 +00:00
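The stop-token shutdown mentioned above, as a minimal C++20 sketch (illustrative, not the actual code): the GC worker sleeps on a condition variable that wakes immediately when stop is requested, so shutdown never waits out a full GC interval.

    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <stop_token>
    #include <thread>

    void GcLoop(std::stop_token token) {
      std::mutex m;
      std::condition_variable_any cv;
      std::unique_lock lock(m);
      while (!token.stop_requested()) {
        // ... one GC pass would run here ...
        // Sleep until the next pass, waking immediately on stop request.
        cv.wait_for(lock, token, std::chrono::seconds(30), [] { return false; });
      }
    }

    int main() {
      std::jthread gc(GcLoop);  // on destruction: requests stop, then joins
      // ... server runs; when main returns, GC shuts down promptly ...
    }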
Andi
38ade99652
HA: Add coordinator to replication cluster (#1608) 2024-01-24 13:07:51 +01:00
DavIvek
6706ebfa2b
Add support for query parameters in return limit (#1654) 2024-01-23 19:17:27 +01:00
Gareth Andrew Lloyd
e7f6a5f4f4
Fix SkipList iterators (#1635)
Fix SkipList iterators and find methods to be as expected by normal C++ iterator usage
2024-01-23 15:31:28 +00:00
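What "as expected by normal C++ iterator usage" entails, in a minimal sketch (a plain linked-node forward iterator, not the real SkipList): exposing the standard member types so std::iterator_traits and the std algorithms accept it, with conventional pre/post-increment and equality.

    #include <cstddef>
    #include <iterator>

    template <typename T>
    struct Node {
      T value;
      Node *next;
    };

    template <typename T>
    class SkipListIterator {
     public:
      // Standard member types: what std::iterator_traits and the
      // <algorithm> machinery expect to find.
      using iterator_category = std::forward_iterator_tag;
      using value_type = T;
      using difference_type = std::ptrdiff_t;
      using pointer = T *;
      using reference = T &;

      explicit SkipListIterator(Node<T> *node) : node_(node) {}

      reference operator*() const { return node_->value; }
      pointer operator->() const { return &node_->value; }
      SkipListIterator &operator++() {    // pre-increment
        node_ = node_->next;
        return *this;
      }
      SkipListIterator operator++(int) {  // post-increment
        auto copy = *this;
        ++*this;
        return copy;
      }
      bool operator==(const SkipListIterator &) const = default;

     private:
      Node<T> *node_;
    };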
andrejtonev
071df2f439
Replication refactor part 7 (#1550)
* Split queries into system and data queries
* System queries are sequentially executed and generate separate transaction deltas
* System transaction try locks for 100ms
* last_commited_system_ts saved to DBMS durability
* Replicating CREATE/DROP DATABASE
* Sending a system snapshot if REPLICA behind
* Passing a copy of the gatekeeper::access as std::any to all functions that could call an async execution
* Removed delete_on_drop flag (we now always delete on drop)
* Using UUID as the directory name for databases
* DBMS durability update (added versioning and salient information)
* Automatic migration from previous version
* Interpreter can run some queries without a target database
* SHOW REPLICA returns the status of the currently active DB
* Returning UUID instead of db name in the RPC responses
* Using UUIDs for database specification in RPC (not name)
* FrequentCheck forces update on reconnect
* TimestampRpc will detect if a replica is behind, and will update client's state
* Safer SLK reads
* Split SHOW DATABASES in two: SHOW DATABASES (a list of the current databases) and SHOW DATABASE (a single string naming the current database)

---------

Co-authored-by: Gareth Lloyd <gareth.lloyd@memgraph.io>
2024-01-23 12:06:10 +01:00
Gareth Andrew Lloyd
7f10636470
Bugfix don't use _Py_IsFinalizing (#1657)
This is an unstable function and would bump our dependency to python 3.7
2024-01-22 16:14:41 +00:00
Marko Barišić
76589903a4
Update BSL license change date (#1656) 2024-01-21 22:33:46 +01:00
Marko Budiselić
a8b625d861
Add NuRaft ADR (#1634)
Co-authored-by: Andi <andi8647@gmail.com>
2024-01-19 22:30:51 +01:00
andrejtonev
9c89fce249
Bugfix: Shutdown blocks due to wrong execution order (#1649)
* Bugfix: Destroying settings before stopping license checker
* Bugfix: Python GC running while shutting down
2024-01-19 17:05:47 +00:00
Marko Barišić
5e5f215be4
Add steps to bring kafka and pulsar up and down on daily builds (#1643) 2024-01-17 14:44:18 +01:00
Ivan Milinović
23dff58d22
Improve memory tracking (#1631) 2024-01-14 11:14:46 +01:00
Marko Budiselić
0a7a7bc0d1
Add Tantivy ADR (#1633) 2024-01-13 08:43:33 +01:00
Aidar Samerkhanov
c772cab766
ToString function now returns double values with precision 15 (#1576)
The DoubleToString function has been updated to handle higher precision doubles correctly. The unnecessary string length restriction has been removed, allowing the function to convert the full double value without prematurely truncating it. This change ensures that the string representation of doubles is more accurate, especially for very large or very small numbers. Unit tests have been added to verify the correct behavior for a range of double values.
2024-01-12 12:32:34 +04:00
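A sketch of the precision-15 conversion described above using only the standard library; the real DoubleToString may differ in formatting details.

    #include <iomanip>
    #include <sstream>
    #include <string>

    std::string DoubleToString(double value) {
      std::ostringstream out;
      out << std::setprecision(15) << value;  // up to 15 significant digits
      return out.str();
    }

    // e.g. DoubleToString(1.0 / 3.0) == "0.333333333333333"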
Aidar Samerkhanov
2e4d27c59a
Add List Pattern Comprehension grammar. (#1588) 2024-01-11 18:20:21 +04:00
DavIvek
31f15b3651
Fix index hints (#1606) 2024-01-11 10:10:06 +01:00
DavIvek
b3d0c2ccc2
Add query parameters support for labels (#1602) 2024-01-10 15:08:21 +01:00
DavIvek
d4bcdb77ad
Fix using path identifier after CREATE (#1629) 2024-01-10 11:46:20 +01:00
Ivan Milinović
1ba2f4e619
Fix flaky GC test (#1619) 2024-01-10 00:11:29 +01:00
DavIvek
4bb9238679
Add support for triggers in database dump (#1610) 2024-01-09 13:05:54 +01:00
DavIvek
bd11266f82
Extend ToBoolean function (#1620) 2024-01-08 13:17:55 +01:00
hal-eisen-MG
57e40a2b18
Initial ADR guidance (#1617)
* Initial ADR guidance

* Clarified who decides whether an ADR is required; and fixed a typo

* Removed old folder location
2024-01-05 09:58:00 -08:00
Gareth Andrew Lloyd
0fb8e4116f
Fix REPLICA timestamps (#1615)
* Fix up REPLICA GetInfo and CreateSnapshot

Subtle bug where these actions were using the incorrect transactional
access while in REPLICA role. This caused the timestamp to be incorrectly
bumped, breaking REPLICA from doing replication.

* Delay DNS resolution

Rather than resolve at endpoint creation, we will instead resolve only
on Socket connect. This allows k8s deployments to change their IP during
pod restarts.

* Minor sonarsource fixes

---------
Co-authored-by: Andreja <andreja.tonev@memgraph.io>
Co-authored-by: DavIvek <david.ivekovic@memgraph.io>
2024-01-05 16:42:54 +00:00
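The "Delay DNS resolution" change above boils down to resolving at connect time instead of endpoint-creation time. A POSIX sketch (illustrative, not the actual Socket code):

    #include <netdb.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #include <string>

    int ConnectTo(const std::string &host, const std::string &port) {
      addrinfo hints{};
      hints.ai_family = AF_UNSPEC;      // IPv4 or IPv6
      hints.ai_socktype = SOCK_STREAM;
      addrinfo *res = nullptr;
      // Resolution happens here, at connect time, not at endpoint creation.
      if (getaddrinfo(host.c_str(), port.c_str(), &hints, &res) != 0) return -1;
      int fd = -1;
      for (addrinfo *p = res; p != nullptr; p = p->ai_next) {
        fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd == -1) continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0) break;  // connected
        close(fd);
        fd = -1;
      }
      freeaddrinfo(res);
      return fd;  // connected socket, or -1 on failure
    }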
Ivan Milinović
7128e1cea8
Fix storage mode flag (#1609) 2024-01-04 20:48:34 +01:00
Andi
4788a633a6
Improve e2e Kafka and Pulsar testing (#1604) 2024-01-02 13:29:25 +01:00
Marko Barišić
ce2705d012
Remove centOS8 daily build (#1587) 2024-01-02 12:14:25 +01:00
Ivan Milinović
686fadf072
Fix slow python QM (Python GC changes) (#1558) 2023-12-27 11:51:10 +01:00
Andi
9e76021b94
Remove usage of RTLD_DEEPBIND and add PIC (#1554) 2023-12-22 09:16:06 +01:00
Antonio Filipovic
cd37de481e
Add atomic memory block around unsafe code blocks (#1589) 2023-12-21 09:43:16 +01:00
Andi
f11b3c6d9d
Fix Kafka's NoBrokersAvailableInfo issue (#1578) 2023-12-20 20:03:06 +01:00
Antonio Filipovic
4ef86efb6f
Fix memgraph crash on telemetry server and no file permissions (#1566) 2023-12-19 14:09:43 +01:00
Andi
04fb92dce8
Fix memory bug Alloc vs. Free (#1570) 2023-12-19 11:13:05 +01:00
Ante Pušić
71e76cc980
Fix unresolved host errors by switching to jemalloc's primary URL (#1579) 2023-12-18 23:31:00 +01:00
DavIvek
cb4d4db813
Fix schema query module (#1510) 2023-12-18 14:34:21 +01:00
DavIvek
39ee248d34
Fix java drivers test (#1577) 2023-12-18 11:47:24 +01:00
Gareth Andrew Lloyd
b35df12c1a
Cleanup filesystem after e2e tests (#1584) 2023-12-14 13:36:33 +00:00
Gareth Andrew Lloyd
21bbc196ae
Cleanup filesystem after unittest (#1581) 2023-12-13 13:19:01 +00:00
Marko Barišić
375c3c5ddd
Update BSL license change date (#1571) 2023-12-08 10:22:53 +01:00
Antonio Filipovic
340057f959
Add robustness on memory tracker stress test (#1394) 2023-12-08 09:23:20 +01:00
Marko Barišić
e56e516f94
Add BigMemory label to the release build (#1568) 2023-12-07 11:54:43 +01:00
gvolfing
7a9c4f5ec4
Fix logic in RelWithDebInfo mode (#1397)
In and only in RelWithDebInfo mode, accessing the maybe_unused variable
results in segfaults; this change makes sure that never happens when the
maybe_unused variable is nullopt, without changing the overall logic.
2023-12-06 22:52:28 +01:00
Antonio Filipovic
eceed274d9
Relax mg assert condition on dealloc (#1492) 2023-12-05 13:44:06 +01:00
Antonio Filipovic
74fa6d21f6
Implement parallel constraints recovery (#1545) 2023-12-04 21:56:05 +01:00
gvolfing
d836b38a8b
Merge pull request #1466 from memgraph/Implement-constant-time-label-and-edge-type-retrieval
Implement constant time label and edge type retrieval

Memgraph now includes two additional queries designed to retrieve
information about the schema of the stored graphs. The SHOW
NODE_LABELS INFO and SHOW EDGE_TYPES INFO queries return
the list of vertex-labels and edge-types that are currently present or at
some point were present in the database respectively. In order for
these queries to work, the flag --storage-enable-schema-metadata has
to be set to True on startup.
2023-12-04 19:56:48 +01:00
gvolfing
eeb9671bac
Merge branch 'master' into Implement-constant-time-label-and-edge-type-retrieval 2023-12-04 18:46:00 +01:00
andrejtonev
e716c90031
Fixed wrong handling of exceptions in SessionHL (#1560) 2023-12-04 18:13:55 +01:00
gvolfing
9690682bc2
Merge branch 'master' into Implement-constant-time-label-and-edge-type-retrieval 2023-12-04 16:18:00 +01:00
Ante Pušić
64e5428d94
Send Bolt success messages only after DB operations run successfully (#1556) 2023-12-04 10:52:00 +01:00
gvolfing
66e86c060f
Merge branch 'master' into Implement-constant-time-label-and-edge-type-retrieval 2023-12-04 09:02:51 +01:00
Aidar Samerkhanov
953a8f5340
Add handling of deleted return values for query procedures and functions ran in analytical mode (#1395)
Co-authored-by: Ante Pušić <ante.pusic@memgraph.io>
2023-12-04 08:32:59 +01:00
gvolfing
31efe28878 Merge branch 'master' into Implement-constant-time-label-and-edge-type-retrieval 2023-12-04 08:00:02 +01:00
Josipmrden
0fb3ae2d56
Fix three match cartesian sequential scanning (#1555) 2023-12-04 00:01:29 +01:00
Josipmrden
46bfeb0023
Fix counting when no matched nodes by property (#1518) 2023-12-03 22:28:26 +01:00
Josipmrden
d58a464141
Remove filter profile info (#1481) 2023-12-03 21:23:52 +01:00
Marko Budiselić
997779fe07
Merge branch 'master' into Implement-constant-time-label-and-edge-type-retrieval 2023-12-02 20:08:14 +01:00
Ante Pušić
3ccd78ac71
Add path and weight to variable expand filter (#1434)
Co-authored-by: Aidar Samerkhanov <aidar.samerkhanov@memgraph.io>
2023-12-02 20:03:40 +01:00
Gareth Andrew Lloyd
14f92b4a0f
Bugfix: correct replication handler (#1540)
Fixes root cause of a cascade of failures in replication code:
- Replica handling of deleting an edge is now corrected. Now tolerant of multiple edges of the same relationship type.
- Improved robustness: correct exception handling around failed stream of current WAL file. This now means a REPLICA failure will no longer prevent transactions on MAIN from performing WAL writes.
- Slightly better diagnostic messages, not user friendly but helps get developer to correct root cause quicker.
- Proactively remove vertex+edges during Abort rather than defer to GC to do that work, this included fixing constraints and indexes to be safe.


Co-authored-by: Andreja Tonev <andreja.tonev@memgraph.io>
2023-12-01 12:38:48 +00:00
gvolfing
9f555cf93d Merge branch 'master' into Implement-constant-time-label-and-edge-type-retrieval 2023-11-30 12:51:13 +01:00
hal-eisen-MG
7fc9b89634
Merge pull request #1530 from memgraph/1529-configure-sonarcloud-for-automatic-analysis
First draft of a sonarcloud properties file
2023-11-28 17:03:06 -08:00
hal-eisen-MG
ed5bdb841f
Merge branch 'master' into 1529-configure-sonarcloud-for-automatic-analysis 2023-11-28 15:23:11 -08:00
gvolfing
b74aee186e Add tests for the retrieval queries 2023-11-28 13:34:21 +01:00
gvolfing
ac0c4193b0 Remove comment 2023-11-28 11:46:32 +01:00
gvolfing
9868dee73b Merge branch 'master' into Implement-constant-time-label-and-edge-type-retrieval 2023-11-28 10:21:59 +01:00
Antonio Filipovic
bb2a7b8f21
Fix frame change collector incomplete pmr type (#1491) 2023-11-28 10:00:34 +01:00
hal-eisen-MG
6680f99e4d
Merge branch 'master' into 1529-configure-sonarcloud-for-automatic-analysis 2023-11-27 10:00:09 -08:00
Antonio Filipovic
72d47fc3bf
Implement short circuiting of exists evaluation (#1539) 2023-11-27 16:44:12 +01:00
hal-eisen-MG
3cf13e2deb
Merge branch 'master' into 1529-configure-sonarcloud-for-automatic-analysis 2023-11-27 05:40:39 -08:00
Andi
7f5a55f1b2
Fix restarts when using init-file flag (#1465) 2023-11-24 13:11:47 +01:00
gvolfing
08acde3973 Merge branch 'master' into Implement-constant-time-label-and-edge-type-retrieval 2023-11-24 12:24:20 +01:00
Marko Barišić
70db2fca56
Change package_all to package_memgraph (#1507)
Add the ability to pick a specific package to build
2023-11-23 12:46:04 +00:00
andrejtonev
8b9e1fa08b
Replication refactor part 6 (#1484)
Single (instance level) connection to a replica (messages from all databases get multiplexed through it)
ReplicationClient split in two: ReplicationClient and ReplicationStorageClient
New ReplicationClient, moved under replication, handles the raw connection, owned by MainRoleData
ReplicationStorageClient handles the storage <-> replica state machine and holds a stream
Removed epoch and storage from *Clients
rpc::Stream proactively aborts on error and sets itself to a defunct state
Removed HandleRpcFailure, instead we simply log the error and let the FrequentCheck handle re-connection
replica_state is now a synced variable
ReplicaStorageClient state machine bugfixes
Single FrequentCheck that goes through DBMS
Moved ReplicationState under DbmsHandler
Moved some replication startup logic under the DbmsHandler's constructor
Removed InMemoryReplicationClient
CreateReplicationClient has been removed from Storage
Simplified GetRecoverySteps and made safer

---------

Co-authored-by: Gareth Lloyd <gareth.lloyd@memgraph.io>
2023-11-23 11:02:35 +01:00
Gareth Andrew Lloyd
e4f94c15c6
Fixes for clang-tidy / sonar issues (#1536) 2023-11-22 13:05:02 +00:00
hal-eisen-MG
0ce6dcf194
Merge branch 'master' into 1529-configure-sonarcloud-for-automatic-analysis 2023-11-21 07:31:17 -08:00
Andi
1d90b60f56
Add schema.assert (#1485) 2023-11-21 09:19:50 +01:00
hal-eisen-MG
108d83cc8c
Merge branch 'master' into 1529-configure-sonarcloud-for-automatic-analysis 2023-11-20 09:33:03 -08:00
Hal Eisen
eff857447a Refine scope to pull in 'include' and 'query_module' directories, in addition to src 2023-11-20 08:16:35 -08:00
Andi
d03fafcef6
Aggregations return empty result when used with group by (#1531) 2023-11-20 11:52:17 +01:00
Hal Eisen
c31a7f9648 First draft of a sonarcloud properties file 2023-11-17 17:25:11 -08:00
imilinovic
6053a91ef8
Fix flaky GC test (#1521) 2023-11-17 17:06:46 -05:00
Antonio Filipovic
645568a75b
Remove default memory limit on procedures (#1506)
* remove default limit on procedures
* fix bug on GraphQL also
2023-11-16 15:01:44 +01:00
Antonio Filipovic
d3f4c35362
Add OOM enabler for MG procedure (#1401) 2023-11-15 12:42:04 +01:00
Josipmrden
c037cddb0e
Add granular index and constraint recovery info (#1480) 2023-11-14 17:23:06 -05:00
imilinovic
ced08fd7bc
Fix GC by adding periodic jemalloc purge (#1471) 2023-11-14 15:06:21 -05:00
gvolfing
a370b09d12 Merge branch 'master' into Implement-constant-time-label-and-edge-type-retrieval 2023-11-14 13:13:16 +01:00
gvolfing
1527bdf435 Make metadata collection settable with a flag
There might be a performance impact of updating the metadata store on
bulk operations. Hence this flag, which disables the collection by
default. If the queries to obtain the information are called with this
flag disabled, the database will throw an exception.
2023-11-14 13:10:08 +01:00
Marko Barišić
9cc060c4b0
Fix error in upload-to-s3 job (#1504) 2023-11-13 13:01:01 +01:00
Marko Barišić
e671a0737e
Fix package specific workflow file (#1503) 2023-11-13 12:54:19 +01:00
Marko Barišić
11be3972c4
Add workflow for packaging memgraph for specific target OS (#1502) 2023-11-13 12:20:49 +01:00
DavIvek
fdab42a023
Use static linking on c++ query modules for glibcxx (#1490) 2023-11-13 12:08:48 +01:00
Andi
e5b2c19ea2
Empty Collect() returns nothing (#1482) 2023-11-13 11:45:09 +01:00
Josipmrden
e907817854
Fix for in list segmentation fault (#1494) 2023-11-13 05:17:10 +01:00
Josipmrden
0756cd6898
Add fix indexed join crash (#1478) 2023-11-12 22:12:25 -05:00
Josipmrden
38ad5e2146
Fix parallel index loading (#1479) 2023-11-12 23:51:00 +01:00
Josipmrden
3c413a7e50
Fix hash join expression matching (#1496) 2023-11-12 14:45:02 -05:00
Antonio Filipovic
17915578f8
Fix race condition and arena tracking bug (#1468) 2023-11-09 18:56:36 +01:00
gvolfing
df3274d78f Make the metadata storing objects threadsafe
The objects stored_node_labels_ and stored_edge_types_ can be accessed
from separate threads, but it was not safe to do so. This commit
replaces the standard containers with threadsafe ones.
2023-11-08 14:43:06 +01:00
Marko Barišić
4e9a036881
Fix v2.12 release pipeline (#1445) 2023-11-08 07:09:02 -05:00
gvolfing
2946d74fdd Merge branch 'master' into Implement-constant-time-label-and-edge-type-retrieval 2023-11-08 07:48:15 +01:00
DavIvek
c8fe9ee7d9
Fix accessing a variable bound to a list within BFS function (#1380) 2023-11-07 20:34:50 +01:00
Ante Javor
e4afddf518
Fix compare results in mgbench (#1319) 2023-11-07 17:04:37 +01:00
Antonio Filipovic
4d5ea03dfa
Use extent hooks for memory procedure limit (#1443) 2023-11-07 16:04:29 +01:00
DavIvek
ece4b0dba8
Fix cached plan not getting invalidated (#1348) 2023-11-07 13:34:03 +01:00
gvolfing
eb4ebab438 Merge branch 'master' into Implement-constant-time-label-and-edge-type-retrieval 2023-11-07 12:29:22 +01:00
gvolfing
260d60451d Modify retrieval function signatures
Before, the functions retrieving data from the metadata-holding
datastructures were returning a std::string, and that was propagated
outward all the way through. To keep these functions consistent with the
rest of the storage/dbaccessor functions, the LabelId and EdgeTypeId will
be propagated instead, and the conversion into a string will only happen
at the interpreter level.
2023-11-07 12:07:52 +01:00
Andi
66487a6dce
Durability improvements (#1385) 2023-11-07 11:37:54 +01:00
gvolfing
c4d9116c9c Add queries to obtain the labels and edge types
Add two queries to be able to retrieve the labels and edge types; this is
done through additions to the DatabaseInfoQuery query types.
2023-11-07 09:35:28 +01:00
Andi
f4b97fc03d
Fix missing statistics for SetPropertiesCursor (#1460) 2023-11-07 09:11:20 +01:00
Antonio Filipovic
58648d1a70
Revert license sender info (#1461) 2023-11-06 16:57:09 +01:00
Antonio Filipovic
1ab7f6ac78
Add notification for user on max map count (#1408) 2023-11-06 15:44:26 +01:00
andrejtonev
dbc6054689
Replication refactor (part 5) (#1378) 2023-11-06 11:50:49 +00:00
gvolfing
50c485fe40 Add storage side capabilites to retrieve metadata
In order to get the required metadata in constant time, we need to keep
track of the node labels and edge types that were ever present in the
database. This is done by the two auxiliary datastructures that are
present in the storage instances. The ability to get this metadata is
propagated to the DBAccessor class, which the query modules can interact
with.
2023-11-06 12:37:48 +01:00
Aidar Samerkhanov
16b8c7b27c
Fix Kafka flaky unit test (#1409) 2023-11-05 20:51:56 +01:00
Antonio Filipovic
48631d1e37
Rename memory usage and memory allocated (#1426) 2023-11-03 14:40:45 +01:00
Antonio Filipovic
93e6d058d2
Remove all_shortest paths unnecessary logs (#1425) 2023-11-03 12:46:06 +01:00
Andi
3e9f25b8e4
Support creating date and localtime from localdatetime (#1381) 2023-11-03 10:54:01 +01:00
Andi
c94201621a
Support deleting paths (#1383) 2023-11-02 14:07:48 +01:00
Andi
fdbc390d53
Throw when reduce inside exists (#1392) 2023-11-02 12:18:15 +01:00
Andi
5e6c5618f5
Make FrameChangeCollector owning the memory resource (#1398) 2023-11-02 09:54:39 +01:00
Andi
4aacd45640
Throw when exists() combined with CASE (#1382) 2023-11-02 08:25:34 +01:00
Gareth Andrew Lloyd
157b36162b
Speedup socket unit test (#1444)
It was testing a setup that wasn't used in production; it would
unnecessarily thrash small buffers.
2023-11-01 17:24:24 +00:00
Marko Barišić
af4fdef029
Change license date before release (#1433) 2023-10-30 13:01:14 +01:00
Josipmrden
5b9802bd7b
Extend property cache to the expression evaluator (#1432)
* Add support for property cache in the produce
* Fix the previous implementation in the map literal
2023-10-28 20:32:58 -07:00
Ante Pušić
b1c3168308
Fix PROFILE infinite loop (#1431) 2023-10-28 15:34:52 +02:00
Andi
011caf3bf1
Fix mgbench daily upload (#1410) 2023-10-28 12:52:14 +02:00
Andi
b1bd977f7b
Fix GQL release workflows (#1424) 2023-10-28 12:50:16 +02:00
gvolfing
c296dc67ce
Add index count to index info (#1229) 2023-10-27 18:13:05 +02:00
Ante Pušić
989bb97514
Extend Cypher queries with the index hinting feature (#1345) 2023-10-27 14:26:19 +02:00
Marko Barišić
a94588bde3
Revert CMake default build type error (#1420) 2023-10-27 08:58:37 +02:00
Marko Barišić
e9f3a5fd1b
Fix default value in release workflows (#1418) 2023-10-26 16:17:16 +02:00
Marko Budiselić
80e1fba8f5
Update bug_report.md template (#1400) 2023-10-26 13:09:16 +02:00
Ante Pušić
3158a16ffd
Add filtering details to EXPLAIN and PROFILE (#1265) 2023-10-25 21:36:20 +02:00
Matija Pintarić
411f8c9d56
Move essential query modules from MAGE to Memgraph (#1384)
* schema.cpp
* mgps.py
* convert.py
2023-10-25 18:27:44 +02:00
Antonio Filipovic
a84f570c6d
Use extent hooks for per query memory limit (#1340) 2023-10-25 16:01:59 +02:00
Josipmrden
3d4d841753
Add constraint verification update only on necessary actions (#1341) 2023-10-25 16:01:02 +02:00
Antonio Filipovic
2426d7980d
Add OOM enabler in operator tree (#1379) 2023-10-25 12:16:11 +02:00
Josipmrden
7ef10dd82a
Fix gql behave dropping connection on Memgraph (#1399) 2023-10-25 10:59:02 +02:00
Gareth Andrew Lloyd
5b91f85161
Improve storage GC (#1387) 2023-10-24 23:41:21 +02:00
Antonio Filipovic
0d9bd5554c
Fix potential bug on memory pool (#1299) 2023-10-24 22:31:36 +02:00
Josipmrden
e617ff9b59
Provide textual information for inefficient plans with notifications (#1343) 2023-10-24 22:20:05 +02:00
Josipmrden
be16ca7362
Add cartesian and hash join operators (#1193) 2023-10-24 21:54:42 +02:00
Josipmrden
fdf63436ab
Add cartesian and hash join mgbench (#1393) 2023-10-24 19:44:11 +02:00
Josipmrden
4e8148f7d9
Add retry logic possible when conflicting transactions (#1361) 2023-10-24 19:43:23 +02:00
imilinovic
1f118e7521
Add renaming of edge types (#1364) 2023-10-24 17:12:09 +02:00
Marko Damjanić
9803f47828
Update contributing part in the README (#1224) 2023-10-24 14:38:06 +02:00
DavIvek
98680b04c9
Add DNS support for cluster replica address (#1323) 2023-10-24 13:11:36 +02:00
Josipmrden
1d45016217
Add values and keys function to map (#1246) 2023-10-24 06:19:20 +02:00
Matija Pintarić
97ed912ab6
Implement map key exists in mgp (#1336) 2023-10-23 15:29:41 +02:00
gvolfing
aec4c3dd2b
Fix bug in alias mappings (#1252) 2023-10-23 13:07:46 +02:00
Antonio Filipovic
7f7f3adfcb
Implement jemalloc extent hooks memory tracker (#1250)
Should improve/fix issues where memory usage exceeds --memory-limit
2023-10-23 12:48:26 +02:00
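A rough sketch of the extent-hooks technique named above, based on the documented jemalloc 5 API (mallctl plus extent_hooks_t). How Memgraph actually wires its tracker is not shown in this log, and the delegate-to-default-hooks pattern below is an assumption; note some builds expose the prefixed je_mallctl instead.

    #include <jemalloc/jemalloc.h>

    #include <atomic>
    #include <cstdint>
    #include <string>

    static std::atomic<int64_t> g_tracked_bytes{0};
    static extent_hooks_t *g_default_hooks = nullptr;  // hooks we delegate to
    static extent_hooks_t g_tracking_hooks;

    static void *TrackingAlloc(extent_hooks_t *, void *new_addr, size_t size,
                               size_t alignment, bool *zero, bool *commit,
                               unsigned arena_ind) {
      void *ptr = g_default_hooks->alloc(g_default_hooks, new_addr, size,
                                         alignment, zero, commit, arena_ind);
      if (ptr) g_tracked_bytes.fetch_add(size, std::memory_order_relaxed);
      return ptr;
    }

    static bool TrackingDalloc(extent_hooks_t *, void *addr, size_t size,
                               bool committed, unsigned arena_ind) {
      bool failed = g_default_hooks->dalloc(g_default_hooks, addr, size,
                                            committed, arena_ind);
      if (!failed) g_tracked_bytes.fetch_sub(size, std::memory_order_relaxed);
      return failed;
    }

    // Install the tracking hooks on one arena; false on mallctl failure.
    bool InstallTrackingHooks(unsigned arena_ind) {
      const std::string key =
          "arena." + std::to_string(arena_ind) + ".extent_hooks";
      size_t sz = sizeof(g_default_hooks);
      // Read the arena's current (default) hooks so we can delegate to them.
      if (mallctl(key.c_str(), &g_default_hooks, &sz, nullptr, 0) != 0)
        return false;
      g_tracking_hooks = *g_default_hooks;      // copy every hook...
      g_tracking_hooks.alloc = TrackingAlloc;   // ...override the two we track
      g_tracking_hooks.dalloc = TrackingDalloc;
      extent_hooks_t *new_hooks = &g_tracking_hooks;
      return mallctl(key.c_str(), nullptr, nullptr, &new_hooks,
                     sizeof(new_hooks)) == 0;
    }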
andrejtonev
26e31ca06f
Fix SHOW CONFIG to show the run-time flag status (#1278) 2023-10-23 10:18:07 +02:00
DavIvek
3ff2c72db9
Fix crash caused by deleting non-existing edge in DETACH DELETE (#1355) 2023-10-23 08:36:28 +02:00
Andi
af56ab6ea8
Forbid changing isolation level for disk and analytical (#1367)
Co-authored-by: Marko Budiselić <marko.budiselic@memgraph.com>
2023-10-23 06:02:56 +02:00
Marko Budiselić
945388fba6
Revert to Release under Diff Release workflow (#1390) 2023-10-22 22:31:06 +02:00
Marko Budiselić
b6693a7df0
Improve systemd config (#1288)
* Add a comment on how to restart automatically
* Add comment on how to deal with out-of-memory (OOM)
2023-10-22 18:44:24 +02:00
Aidar Samerkhanov
667e7f670e
Parametrize CI workflows build type (#1324)
Release is the default; it's also possible to run package_all with
RelWithDebInfo.
2023-10-17 00:04:08 +02:00
andrejtonev
a100b900c5
Fix DB configuration error under expansion gbench (#1368) 2023-10-16 23:12:49 +02:00
Gareth Andrew Lloyd
64bf75117b
Fix subtle replication bug (#1370)
When going from REPLICA to MAIN, we need to ensure current WAL files are
finalised.
2023-10-16 22:56:56 +02:00
Marko Budiselić
9524a51576
Add v5 toolchain (#608)
* clang 17.0.2
* gcc 13.2
* upgrade libs
* tmp disable gpg check, tmp disable fblibs
2023-10-16 19:01:39 +02:00
Marko Budiselić
fd10d1c9f8
Fix CPU 100% usage by websocket error handling improvement (#1327) 2023-10-16 15:41:12 +02:00
andrejtonev
22d8ef75e0
Updated telemetry client-side (#1337) 2023-10-16 14:16:00 +02:00
Andi
7b0bafa21e
Add human readable memory allocations in show storage info (#1366) 2023-10-16 11:35:44 +02:00
Andi
de9280b334
Refactor disk storage (#1347) 2023-10-16 09:11:07 +02:00
Kruno Golubic
766ac48261
Update banner in README (#1344) 2023-10-12 14:27:36 +02:00
Andi
06868c8be7
Run separate GQL suits for different storage modes (#1346) 2023-10-11 11:42:41 +02:00
Andi
1a3c5af797
Improve expansions on disk (#1335)
* Improve disk expansions
2023-10-11 10:18:50 +02:00
Gareth Andrew Lloyd
d278a33f31
Decouple pure replication state from storage [part 1] (#1325)
A major refactor to decouple replication state from storage.
ATM it is still owned by storage but a following part should fix that.
2023-10-10 11:44:19 +01:00
Aidar Samerkhanov
7fbf5857f2
Add GQL behave tests for on-disk storage (#1238) 2023-10-10 09:27:11 +03:00
DavIvek
0d51a20a02
Fix a crash caused by declaring a path with only one node in OPTIONAL MATCH clause (#1318) 2023-10-09 15:25:25 +02:00
DavIvek
3143c986de
Fix crash caused by using exists() in a RETURN statement (#1303) 2023-10-09 11:31:49 +02:00
Andi
2fd34489af
Add mgbench support for disk storage and analytical mode (#1286)
* Add mgbench support for disk storage and analytical mode
2023-10-06 10:19:29 +02:00
Gareth Andrew Lloyd
3cc2bc2791
Refactor interpreter to support multiple distributed clocks (Part 1) (#1281)
* Interpreter transaction ID decoupled from storage transaction ID
* Transactional scope for indices, statistics and constraints
* Storage::Accessor now has 2 modes (unique and shared)
* Introduced ResourceLock to fix pthread mutex problems
* Split InfoQuery in two: non-transactional SystemInfoQuery and transactional DatabaseInfoQuery
* Replicable and durable statistics
* Bumped WAL/Snapshot versions
* Initial implementation of the Lamport clock

---------

Co-authored-by: Andreja Tonev <andreja.tonev@memgraph.io>
2023-10-05 16:58:39 +02:00
Gareth Andrew Lloyd
d71b6a5007
Refactor replication client/server (#1311) 2023-09-29 11:21:42 +01:00
Andi
61ac7e1b11
Add --storage-mode flag (#1282)
* Add --storage-mode flag
2023-09-26 14:47:30 +02:00
Andi
efdf7baea0
Refactor mgbench 2023-09-22 19:05:16 +02:00
Gareth Andrew Lloyd
eb4e2b019d
Fix distinct, now doesn't impacts other aggregates (#1235)
Before, a distinct on one aggregate would impact distinct on another
aggregate. Fixed the logical error and at the same time did some memory
optimisations.
2023-09-20 16:45:55 +01:00
Andi
1553fcb958
Improve deserialization performance
* Change std::stoull to std::from_chars
---------

Co-authored-by: Aidar Samerkhanov <aidar.samerkhanov@memgraph.io>
2023-09-20 14:25:17 +02:00
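The swap mentioned above in a nutshell: std::from_chars parses in place with no allocation, locale, or exceptions, unlike std::stoull, which needs a std::string. A minimal sketch (the function name is illustrative):

    #include <charconv>
    #include <cstdint>
    #include <string_view>

    uint64_t DeserializeId(std::string_view text) {
      uint64_t value = 0;
      // Parses in place: no std::string temporary, no locale, no exceptions.
      // (Error handling elided; std::from_chars reports errors via its result.)
      std::from_chars(text.data(), text.data() + text.size(), value);
      return value;
    }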
andrejtonev
bce48361ca
Decoupling Interpreter from Storage (#1186)
Unique/global InterpreterContext that is Storage agnostic (has a reference to the DbmsHandler instead)

* InterpreterContext is no longer the owner of Storage
* New Database structure that handles Storage, Triggers, Streams
* Renamed SessionContextHandler to DbmsHandler and simplified the multi-tenant logic
* Added Gatekeeper and updated handlers to use it

---------

Co-authored-by: Gareth Lloyd <gareth.lloyd@memgraph.io>
2023-09-20 13:13:54 +02:00
imilinovic
404cdf05d3
Add path pop to mgp API (#1249) 2023-09-19 12:37:55 +02:00
Marko Barišić
b719f0744f
Update v2.11 license date (#1247) 2023-09-13 07:19:11 -04:00
Josipmrden
440838c0e9
Add dependency check for e2e tests (#1240) 2023-09-12 11:34:08 -04:00
Josipmrden
79a3c5af8e
Add manual performance benchmark execution (#1239) 2023-09-12 11:33:05 -04:00
Josipmrden
bf03b38e39
Remove gqlalchemy from stress tests (#1245) 2023-09-12 11:32:16 -04:00
gvolfing
fd63944493
Add --query-callable-mappings-path package default (#1203) 2023-09-11 15:12:14 -04:00
Gareth Andrew Lloyd
6694de2dfa
Fix libkrb5 TRUE and FALSE macros leakage (#1243)
Co-authored-by: Marko Budiselić <marko.budiselic@memgraph.com>
2023-09-11 12:46:40 -04:00
andrejtonev
5e5f4ffc5d
Add more runtime configurable settings (#1183)
server name, query timeout settings, log.level, log.to_stderr
2023-09-11 11:30:54 -04:00
Ante Pušić
060b9d1c16
[master < T1204] Add detailed operator info to PROFILE and EXPLAIN (#1204) 2023-09-11 14:34:27 +02:00
Ante Pušić
29a505cb38
Property lookup caching (#1168) 2023-09-11 13:03:54 +02:00
Ante Pušić
d4fcd745d2
Sort SHOW INDEX INFO (#1178) 2023-09-11 10:59:41 +02:00
Josipmrden
58546a9fe1
Add detach delete in bulk (#1078) 2023-09-10 18:53:03 +02:00
Gareth Andrew Lloyd
ab56abf4ca
Optimize scanning vertices (#1227) 2023-09-09 10:09:25 -04:00
Gareth Andrew Lloyd
1bd47318cd
Improve PropertyStore (#1142)
Improve AnyVersionHasLabelProperty by doing less work in some instances.
Improve FindSpecificProperty.
2023-09-09 08:00:43 -04:00
Ante Pušić
0403b67073
Fix returning NULL on map projection from a null value (#1119) 2023-09-09 06:43:25 -04:00
DavIvek
9e4babcdbb
Fix segfault based on issue #874 (#1175) 2023-09-09 02:04:46 -04:00
Antonio Filipovic
b094fdbadc
Fix API bug on accessing deleted object (#1209) 2023-09-08 13:52:21 -04:00
Josipmrden
07dea328d8
[master < T1110] Add merge optimization to expand dynamically during runtime (#1110) 2023-09-08 17:12:25 +02:00
Gareth Andrew Lloyd
bd1852f407
Reduce flake SnapshotFallback test (#1237)
Fixed the wait period, this should ensure at least one snapshot was 
made. Also cleaned up the checking around this. And also better 
corruption.
2023-09-08 14:21:35 +01:00
imilinovic
9c51dbbb01
Implement changing from and to vertices in relationships (#1221) 2023-09-08 12:52:40 +02:00
ind1xa
c0d4f5e0bc
Add traits under mgp iterator (#1210) 2023-09-08 08:57:37 +02:00
Antonio Filipovic
974a6e3027
Fix bug on mgp dispatcher guard (#1225) 2023-09-07 17:42:27 +02:00
Matija Pintarić
d9464c6ffd
Add InDegree and OutDegree in O(1) (#1217) 2023-09-07 13:16:30 +02:00
Ante Javor
312d01bd0c
Remove repeated log lines from TRACE log level. (#1054) 2023-09-06 23:09:51 +02:00
Antonio Filipovic
b6b32bec03
Improve performance of delta creation (#1129) 2023-09-06 11:30:21 +02:00
Antonio Filipovic
93992a275b
Improve NameToId mapper on set properties (#1147) 2023-09-06 00:12:27 +02:00
Andi
b5413c6f82
Add edge import mode into the on-disk storage (#1157) 2023-09-05 19:00:53 +02:00
Josipmrden
09fd5939da
Remove double scan with expand from the planner (#1085) 2023-09-05 11:02:52 +02:00
Josipmrden
02eab6ab9c
Set properties C API extension (#1131)
Add SetProperties into the C++ query module API
2023-09-04 16:17:43 +02:00
Gareth Andrew Lloyd
9661c52179
Introduce a reader writer spin lock (#1187)
It is possible for multiple read only queries to be accessing the same
sequence of vertices/edges. The reader mode of the spin lock will ensure
multiple threads can make progress at the same time.
2023-09-01 14:21:15 +01:00
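A minimal reader-writer spin lock sketch in the spirit of this commit (the real lock's fairness and backoff policy may differ): a single atomic counter where positive values count readers and -1 marks a writer. The member names match the standard Lockable concepts, so std::shared_lock and std::unique_lock work with it.

    #include <atomic>
    #include <cstdint>

    class RWSpinLock {
     public:
      // Reader side: spin until no writer holds the lock, then bump the count.
      void lock_shared() {
        for (;;) {
          int32_t v = state_.load(std::memory_order_relaxed);
          if (v >= 0 &&
              state_.compare_exchange_weak(v, v + 1, std::memory_order_acquire))
            return;
        }
      }
      void unlock_shared() { state_.fetch_sub(1, std::memory_order_release); }

      // Writer side: spin until completely free, then mark exclusive.
      void lock() {
        for (;;) {
          int32_t expected = 0;
          if (state_.compare_exchange_weak(expected, -1,
                                           std::memory_order_acquire))
            return;
        }
      }
      void unlock() { state_.store(0, std::memory_order_release); }

     private:
      std::atomic<int32_t> state_{0};  // 0: free, >0: readers, -1: writer
    };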
Gareth Andrew Lloyd
e928eed028
Replication refactor (part 4) (#1211)
More refactoring to isolate generic replication behavior. Making the 
InMemory* types even more decoupled from replication logic.
2023-08-31 16:06:44 +01:00
Josipmrden
eb5167dfef
Add high write set property workload (#1172) 2023-08-31 14:46:35 +02:00
Josipmrden
b952139973
Add supernode performance workload (#1171) 2023-08-30 15:19:52 +02:00
andrejtonev
28dbcd1545
Add disk storage to e2e tests (#1202)
* Add disk storage to e2e tests

---------

Co-authored-by: Andi Skrgat <andi8647@gmail.com>
2023-08-30 13:42:11 +02:00
Matija Pintarić
d516e40841
Add ToString on C++ API mgp types(#1140) 2023-08-29 17:30:23 +02:00
Andi
a6ec81b179
Add deterministic disk vertex_count and edge_count (#1146)
* Add exact vertex_count and edge_count to disk storage
2023-08-29 13:07:23 +02:00
andrejtonev
c526ff2a8f
[master < ] Remove DbAccessor from non-transactional queries (#1201)
* Decouple non-transactional queries from DbAccessor
* Invalidate auth cache after AuthQuery

Co-authored-by: Gareth Lloyd <gareth.lloyd@memgraph.io>
2023-08-29 11:13:42 +02:00
Aidar Samerkhanov
5f509532f2
Add timestamp to DELETE_DESERIALIZED_OBJECT delta at which this object was created. (#1179)
RocksDB currently doesn't provide timestamp() functionality in iterators of TransactionDB.
Because of that, we are using a constant "0" timestamp for DELETE_DESERIALIZED_OBJECT.
2023-08-28 10:56:17 +04:00
Andi
4b3ba908c7
Code improvements on disk storage (#1153)
* Improvements based on a code review

---------

Co-authored-by: Aidar Samerkhanov <aidar.samerkhanov@memgraph.io>
2023-08-26 14:16:12 +02:00
Andi
030b554ffd
Improve concurrency control for on-disk storage (#1154)
* Remove locking vertices and serialization checks
---------

Co-authored-by: Aidar Samerkhanov <aidar.samerkhanov@memgraph.io>
2023-08-25 14:42:52 +02:00
Gareth Andrew Lloyd
4bc5d749b2
Refactor replication, part 3 (#1177)
Changes to make replication code agnostic of the storage kind being used.

Co-authored-by: Andreja Tonev <andreja.tonev@memgraph.io>
2023-08-25 10:52:07 +01:00
imilinovic
a426ef9cc3
Add Relationship::RemoveProperty to C++ query module API (#1156) 2023-08-24 12:14:00 +02:00
Ante Pušić
60e167d676
Optimize index and constraint updates (#1159) 2023-08-23 14:52:44 +02:00
Ante Pušić
3f8befde79
Bump PyYAML version (#1174) 2023-08-23 12:48:17 +02:00
andrejtonev
9355e58e73
Decoupling replication logic from InMemoryStorage (#1169) 2023-08-22 13:29:25 +02:00
gvolfing
476968e2c8
Fix concurrent query module race condition (#1158)
Concurrent access to the same query module had a race condition on the
pointer that was used to handle the custom memory management. With this
commit, a mapping has been added to keep information about what
thread used the pointer to handle the memory resources. This should be
fine since the respective query executions are running on a dedicated
thread. Access to the mapping itself is threadsafe. A simple RAII
wrapper for the mapping container has also been added for simpler
client-side use.
2023-08-21 16:45:36 +02:00
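The shape of the fix described above, as a sketch with illustrative names (not the actual module code): a mutex-guarded map from thread id to the per-execution memory resource, plus the simple RAII wrapper the commit mentions.

    #include <map>
    #include <mutex>
    #include <thread>

    class ThreadResourceMap {
     public:
      void Set(void *resource) {
        std::lock_guard lock(mutex_);
        map_[std::this_thread::get_id()] = resource;
      }
      void *Get() {
        std::lock_guard lock(mutex_);
        auto it = map_.find(std::this_thread::get_id());
        return it == map_.end() ? nullptr : it->second;
      }
      void Clear() {
        std::lock_guard lock(mutex_);
        map_.erase(std::this_thread::get_id());
      }

     private:
      std::mutex mutex_;
      std::map<std::thread::id, void *> map_;
    };

    // RAII wrapper: registers the resource for this thread for the duration
    // of a single query-module call, then unregisters it.
    class ScopedResource {
     public:
      ScopedResource(ThreadResourceMap &map, void *resource) : map_(map) {
        map_.Set(resource);
      }
      ~ScopedResource() { map_.Clear(); }

     private:
      ThreadResourceMap &map_;
    };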
Gareth Andrew Lloyd
97183fb9da
Fix FLAGS_delta_chain_cache_threshold typo (#1181) 2023-08-21 13:16:02 +02:00
Gareth Andrew Lloyd
adb65b2fff
Refactor memgraph.cpp (#1164) 2023-08-18 18:23:15 +02:00
Aidar Samerkhanov
3bf2cf65ab
Optimize splitting keys inside the on-disk storage (#1155) 2023-08-17 18:09:21 +02:00
ind1xa
8f3f693f20
Fix duration overflow (#1150) 2023-08-16 13:30:26 +02:00
Gareth Andrew Lloyd
2e51e703c3
Add supernode vertex cache (#1124)
Add supernode vertex cache to account for long delta chains and modifications in the same module being independent of scanning of the nodes in the next iteration of the pulling mechanism.
2023-08-11 10:18:28 +02:00
Andi
adf7533751
Optimize import of edges on disk (#1132) 2023-08-10 11:53:07 +02:00
Antonio Filipovic
509183e985
Improve performance on set properties (#1115) 2023-08-10 09:06:44 +02:00
Aidar Samerkhanov
1fe2190747
Filter deleted edges during edge prefetch (#1145) 2023-08-09 13:56:34 +02:00
Andi
762fe6a65d
Improve disk indices (#1139) 2023-08-09 10:16:49 +02:00
Aidar Samerkhanov
271b1a5ddb
Fix bug with on-disk triggers (#1134)
* Fix TriggerContext adaptation for accessors.
* Fix edge deserialization in case of the deleted vertex.
2023-08-08 10:37:14 +02:00
gvolfing
260660f1dd
Fix sequential label-property index recovery (#1135)
The parallel_exec_info should have been passed to this function before;
otherwise, the recovery of label-property indices would never have been
parallelized.
2023-08-05 23:20:15 +02:00
Marko Budiselić
e5350a011c
Upgrade to mgconsole v1.4.0 (#1144) 2023-08-05 15:52:31 +02:00
Kruno Golubic
7bf827bb1e
Remove link to Discourse forum from README (#1138) 2023-08-05 14:37:07 +02:00
Marko Budiselić
5d13c281fa
Update v2.10 license date (#1133) 2023-08-02 15:13:15 +02:00
Marko Budiselic
020273f475 Increase package all ARM timeout 2023-08-02 07:53:31 +00:00
andrejtonev
4a99625287
Fix throwing if user was created without a license (#1067) 2023-08-01 23:36:12 +02:00
andrejtonev
5bbed6ef9a
Implement user caching to speed up PullPlan (#1109) 2023-08-01 23:04:35 +02:00
Andi
f0bac53e7b
Improve restore replication role (#1089) 2023-08-01 21:51:52 +02:00
Matija Pintarić
514fed51c4
Add implementation of C++ API Node::RemoveProperty (#1128) 2023-08-01 20:11:38 +02:00
imilinovic
2877c343e8
Add implementation of << operator for mgp::Value (#1127) 2023-08-01 19:30:23 +02:00
ind1xa
50a1d1abb3
Add remove label to the query modules C API (#1126) 2023-08-01 19:24:11 +02:00
andrejtonev
e8850549d2
Add multi-tenancy v1 (#952)
* Decouple BoltSession and communication::bolt::Session
* Add CREATE/USE/DROP DATABASE
* Add SHOW DATABASES
* Cover WebSocket session
* Simple session safety implemented via RWLock
* Storage symlinks for backward. compatibility
* Extend the audit log with the DB info
* Add auth part
* Add tenant recovery
2023-08-01 18:49:11 +02:00
andrejtonev
fd819cd099
Add custom e2e test activation to include the toolchain libs (#1130) 2023-08-01 17:13:47 +02:00
Andi
60f4ffc6a1
Improve logging if replica cannot recover using curr WAL file (#1086) 2023-08-01 10:33:46 +02:00
Andi
bd2ec6374a
Remove node argument from start-stop Jepsen functions (#1091) 2023-08-01 00:06:57 +02:00
Marko Budiselić
c501f59a09
Add more info to the query module error messages (#771) 2023-08-01 00:05:44 +02:00
gvolfing
210bea83d4
Add GraphQL transpilation compatibility (#1018)
* Add callable mappings feature
* Implement mgps.validate (void procedure)
* Make '_' a valid variable name
2023-07-31 14:48:12 +02:00
Josh Soref
57fe3463f2
Fix a bunch of spelling mistakes (1/n) (#1112) 2023-07-30 14:05:05 +02:00
Tyler Neely
53fcd8ac4d
Add manual .py test verifying isolation levels (#407) 2023-07-30 14:04:26 +02:00
Tyler Neely
259cba5d43
Add manual .py test asserting forward progress across reconnects (#408) 2023-07-30 14:02:59 +02:00
Antonio Filipovic
285b409927
Improve all shortest paths memory usage (#981)
* Change allocator to PoolResource
2023-07-30 12:58:07 +02:00
Gareth Andrew Lloyd
8ebab84324
Add handling of partial results on timeout (#1046) 2023-07-30 10:48:11 +02:00
imilinovic
3fd9ce4a33
Add mgp::map erase and update (#1103) 2023-07-30 08:36:50 +02:00
Andi
be4eb95a98
Fix Jepsen replication pause (#1082) 2023-07-30 02:36:11 +02:00
Andi
903a9f4636
Set explicit node config for jepsen tests (#1088) 2023-07-30 00:49:48 +02:00
Andi
18bd02423a
Fix PropertyStore buffer serialization (#1111) 2023-07-29 19:14:27 +02:00
andrejtonev
58c0c4cebb
Add missing-field-initialization warning flag (#1113) 2023-07-29 17:59:11 +02:00
andrejtonev
110ca3968c
Fix path generation ignores edge's element_id (#1108) 2023-07-29 15:51:51 +02:00
Andi
9072fb7703
Fix flaky transactional queue e2e test (#1102) 2023-07-29 11:11:27 +02:00
Matija Pintarić
2b7707a2f1
Add non-const return value for mgp::Value subtypes (#1099) 2023-07-28 12:26:10 +02:00
Matija Pintarić
76ca019f31
Add overload of operator< for mgp::Value (#1090) 2023-07-28 11:35:14 +02:00
imilinovic
609b9a20f1
Add hash on mgp::Value (#1093) 2023-07-28 09:08:36 +02:00
Matija Pintarić
e489e4f3e7
Extend insert on Record to accept mgp::Value (#1094) 2023-07-27 10:33:32 +02:00
Matija Pintarić
ab4d1efe0b
Overload << operator for mgp::Value::Type (#1080) 2023-07-25 13:30:02 +02:00
Bruno Sačarić
036da58d30
Add standalone upload to S3 workflow (#866) 2023-07-22 19:21:17 +02:00
Marko Budiselić
919f07fae1
Add macOS support under environment/util.sh:operating_system (#1098)
* Fix release/package ubuntu-22.04 amd64 Dockerfile
2023-07-22 19:20:10 +02:00
Marko Budiselić
ca1e98ad94
Add package/release/run.sh build (#1060)
* Optimize `memgraph/memgraph-builder` (on Dockerhub) image size
* Optimize `mgbuild_{{ os }}` (under CI) image size
2023-07-22 18:00:38 +02:00
Marko Budiselić
f0aca2d23b
Improve init script logging (#1096) 2023-07-22 13:08:39 +02:00
Marko Budiselić
ec9840fff2
Fix custom maven setup (#1044) 2023-07-22 12:29:36 +02:00
Marko Budiselić
992e718a97
Fix v2.9 release build issues (#1087)
* Fix include order inside `tests/unit/query_variable_start_planner.cpp` that fails on a RHEL7-based OS
2023-07-20 16:15:27 +02:00
Marko Budiselić
2ec6b7f40b
Update v2.9 license date (#1081) 2023-07-20 10:56:31 +02:00
Andi
05eca46267
Fix flaky transaction_queue_multiple unit test (#1077) 2023-07-19 22:58:02 +02:00
Vlasta
fae039c215
Improve logging for the on-disk storage (#1079) 2023-07-19 22:54:42 +02:00
Marko Budiselić
3b9133fd5a
Improve e2e and replication testing setup (#1061)
* Add `--replication-restore-state-on-startup` with `false` as default

Co-authored-by: Aidar Samerkhanov <aidar.samerkhanov@memgraph.io>
Co-authored-by: Andi Skrgat <andi8647@gmail.com>
2023-07-19 21:18:43 +02:00
Marko Budiselić
9d056e7649
Add experimental/v1 of ON_DISK_TRANSACTIONAL storage (#850)
Co-authored-by: Andi Skrgat <andi8647@gmail.com>
Co-authored-by: Aidar Samerkhanov <aidar.samerkhanov@memgraph.io>
2023-06-29 11:44:55 +02:00
Antonio Filipovic
aa4f68a37d
Add error handling on py batched module init (#1052) 2023-06-28 17:23:42 +02:00
Josipmrden
84721f7e0a
Add vertex degree to index statistics (#1026)
Add graph analysis of vertex degrees when doing ANALYZE GRAPH.
2023-06-27 18:06:20 +02:00
Ante Javor
261aa4f49b
Improve replication logging (#1030) 2023-06-27 17:57:51 +02:00
Josipmrden
5ce1526995
Fix map at operator returning null from a map (#1039)
Make the map `at` operator throw an exception when the key is not present instead of returning null
2023-06-27 15:04:12 +02:00
Antonio Filipovic
cb843ee664
Fix multiple include error on mgp.hpp (#1043) 2023-06-27 10:07:33 +02:00
Andi
0f1ca745e5
Improve connection handling in tests/e2e (#1012) 2023-06-26 22:43:34 +02:00
Gareth Andrew Lloyd
3b781bf525
Add HTTP+GZIP support to LOAD CSV (#1027) 2023-06-26 19:10:48 +02:00
Antonio Filipovic
d573eda8bb
Add python & cpp batching option in procedures
* Add API for batching from the procedure 
* Use PoolResource for batched procedures
2023-06-26 15:46:13 +02:00
Marko Budiselić
00226dee24
Improve setup when memgraph is a git submodule (#1038) 2023-06-26 12:27:58 +02:00
Marko Budiselić
546bfc0ede
Fix package workflow on ARM (#1040) 2023-06-26 11:19:13 +02:00
Gareth Andrew Lloyd
5b1ba10183
Fix IN_MEMORY_ANALYTICAL storage GC (#1025) 2023-06-23 12:50:03 +02:00
Vlasta
b25e9968ee
Update links inside CSV import tool (#834) 2023-06-22 16:00:22 +02:00
Vlasta
bcd23fe3cb
Update CSV import tool error docs 2023-06-22 14:44:46 +02:00
Katarina Supe
68e5610566
Fix replica exception message (#930) 2023-06-22 14:41:59 +02:00
Ante Javor
0ea96663ba
Add check for opening snapshots (#966) 2023-06-22 13:29:49 +02:00
Marko Budiselić
da17fe92d6
Update package_docker by adding --pull (#1032)
Each release will pull the latest base image because we want to include any new
security patches.
2023-06-22 12:12:31 +02:00
Marko Budiselić
e73eac77a9
Improve libstdc++ dependency on RPM systems (#863) 2023-06-22 10:14:59 +02:00
Marko Budiselić
d51a61fc5f
Update dependencies under environment/os (#862) 2023-06-21 23:14:37 +02:00
Josipmrden
b875649270
Add restoring of replication roles upon database startup (#791)
Fix replica node restoration on startup so it is restored as a replica and not as main.
2023-06-21 19:08:58 +02:00
Josipmrden
05cc35bf93
Add command NULLIF for identifying nulls in LOAD CSV (#914)
Add the NULLIF command, which imports as null every row value equal to the given character sequence (a sketch of the transformation follows this entry).
2023-06-21 14:50:46 +02:00
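A minimal sketch of the transformation NULLIF implies, not Memgraph's actual LOAD CSV internals: each cell equal to the configured character sequence is imported as null.

```cpp
#include <optional>
#include <string>
#include <vector>

// Sketch of the transformation NULLIF implies, not Memgraph's LOAD CSV
// internals: any cell equal to the configured sequence becomes null.
std::vector<std::optional<std::string>> ApplyNullif(
    const std::vector<std::string> &row, const std::string &nullif) {
  std::vector<std::optional<std::string>> out;
  out.reserve(row.size());
  for (const auto &cell : row) {
    if (cell == nullif) {
      out.emplace_back(std::nullopt);  // imported as null
    } else {
      out.emplace_back(cell);
    }
  }
  return out;
}
```

For example, with the sequence set to "N/A", the row `["Alice", "N/A"]` would be imported as `["Alice", null]`.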
Josipmrden
63f8298033
Fix MATCH + LOAD CSV to load CSV only once (#916)
* update profile query to use poolresource
* Optimize update of indexes
* Add ignore empty strings to load csv
* Add operator changes to support handling of nulls
* Store chunks in memory pools ordered
* Use same max block per chunks number
* Remove redundant return statement
* add hacky cached solution
* change map to set
* remove memory
* Add match load csv invalid behaviour commit
* Accept input on LOAD CSV
* Omit changes not tied to the PR
* Add tests for match + load csv
* Add gqlalchemy installation for e2e tests
* Modify setup script to update packages
* Revert gqlalchemy to 1.3.3
* Revert gqlalchemy to 1.3.3
* Address PR review comments
* Omit semicolon
---------

Co-authored-by: antoniofilipovic <filipovicantonio1998@gmail.com>
Co-authored-by: János Benjamin Antal <benjamin.antal@memgraph.io>
2023-06-21 11:13:40 +02:00
Josipmrden
df95775222
Fix init file startup in community edition (#974)
* Fix init file startup in community edition

* Add possibility to build binary without MG_ENTERPRISE

* Added trace spdlog for when init file is not present

* Add gqlalchemy and unit tests

* Add init data files which correspond to the right directory by the github actions
2023-06-20 17:54:50 +02:00
Josipmrden
eb22edfd35
Add any type to C++ mgp wrapper 2023-06-20 09:33:14 +02:00
Marko Budiselić
1b85d77e9e
Add cypherl transform scripts under import/ (#773)
* Add `mglogs2cypherl.sh`
* Add `n2mg_cypherl.sh`
2023-06-18 11:57:46 +02:00
Marko Budiselić
cf1a86ed13
Refactor tests/integration/run.sh (#1016) 2023-06-15 23:10:52 +02:00
Marko Budiselić
7fb3f62703
Upgrade to RocksDB 8.1.1 (#1013) 2023-06-15 11:54:24 +02:00
Marko Budiselić
cb4b71bdbd
Update pull_request_template.md 2023-06-14 16:04:35 +02:00
andrejtonev
30ec570bb9
Add Bolt v5 support (#938) 2023-06-12 18:55:15 +02:00
Antonio Filipovic
d917c3f0fd
Fix slow IN LIST evaluation (#901) 2023-05-29 17:52:20 +02:00
andrejtonev
d842adbed3
Handle user-defined metadata and expose it with SHOW TRANSACTIONS (#945) 2023-05-29 11:40:14 +02:00
Bruno Sačarić
cdfcbc106c
Update license date (#941) 2023-05-18 11:42:12 +02:00
Josipmrden
651b6f3a5a Expose system metrics over HTTP Endpoint (#940) 2023-05-18 05:10:57 +00:00
Ante Pušić
0d9bd74a8a
Add support for map projection (#892) 2023-05-16 20:05:35 +02:00
andrejtonev
802f8aceda
Add data directory status and (un)lock query (#933) 2023-05-16 18:36:04 +02:00
gvolfing
7ddce539fa
Add return build type command (#894) 2023-05-16 16:02:03 +02:00
gvolfing
c3e4f81026
Include additional info inside storage mode info query (#883) 2023-05-16 14:25:41 +02:00
Antonio Filipovic
208705f296
Reduce memory consumption on return from python procedures (#932) 2023-05-16 10:33:09 +02:00
Ante Javor
69634a5354
Fix typo in mgbench 2023-05-10 14:02:46 +02:00
Aidar Samerkhanov
b8f282468d
Update pulsar client for e2e tests 2023-05-09 12:23:28 +02:00
Ante Javor
ab38161cd2
Fix methodology links (#903)
Co-authored-by: Josip Mrden <josip.mrden@memgraph.io>
2023-05-03 16:37:36 +02:00
János Benjamin Antal
3a5f140c2b
Order chunks in utils::Pool to speed up deallocation (#898) 2023-05-02 13:08:20 +02:00
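A sketch of the idea behind #898, under the assumption that deallocation must locate the chunk owning a pointer: keeping the chunks sorted by address turns that lookup into a binary search instead of a linear scan. The `Chunk` layout is hypothetical, not the actual `utils::Pool` code.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical chunk layout; only the ordering idea matters here.
struct Chunk {
  std::byte *begin;
  std::byte *end;
};

// `chunks` is kept sorted by `begin`, so the owner of `p` is found in
// O(log n) with binary search instead of a linear scan.
Chunk *FindOwningChunk(std::vector<Chunk> &chunks, void *p) {
  auto *ptr = static_cast<std::byte *>(p);
  auto it = std::upper_bound(chunks.begin(), chunks.end(), ptr,
                             [](std::byte *x, const Chunk &c) { return x < c.begin; });
  if (it == chunks.begin()) return nullptr;  // p is below every chunk
  --it;                                      // last chunk starting at or before p
  return ptr < it->end ? &*it : nullptr;
}
```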
Ante Javor
eead0f79fc
Fix missing argument in daily benchmark (#907) 2023-05-02 11:00:44 +02:00
Antonio Filipovic
91017b7f36
Update profile query to use PoolResource for LOAD CSV (#885) 2023-04-26 18:04:13 +02:00
gvolfing
00f8d54249
Parallelize index creation (#882) 2023-04-26 16:28:02 +02:00
János Benjamin Antal
4fcdd52f88
Use correct memory resource (#900) 2023-04-26 10:02:55 +02:00
János Benjamin Antal
6c947947eb Parallelize recovery (#868)
* Parallelize edge recovery

* Load vertex labels and properties parallel

* Add parallel connectivity loading

* Add batches information to snapshot

* Introduce `items_per_batch` and `recovery_thread_count` flags

* Make possible to load snapshots with old version

* Add vertex batches to `RecoveryInfo`

* Extend durability integration tests with v15 test cases

* Add `std::vector` based `InitProperties`

* Use `InitProperties` in snapshot loading
2023-04-25 16:25:25 +02:00
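A hedged sketch of the batching scheme #868 describes: the snapshot records batch boundaries so that `recovery_thread_count` workers can load disjoint vertex ranges of `items_per_batch` items each. Names such as `LoadVertexBatch` are placeholders, not Memgraph's actual recovery code.

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <thread>
#include <utility>
#include <vector>

// Placeholder for the real per-batch snapshot loader.
void LoadVertexBatch(uint64_t begin, uint64_t end) { (void)begin; (void)end; }

void RecoverInParallel(uint64_t vertex_count, uint64_t items_per_batch,
                       unsigned thread_count) {
  // The snapshot's batch boundaries let workers load disjoint ranges.
  std::vector<std::pair<uint64_t, uint64_t>> batches;
  for (uint64_t begin = 0; begin < vertex_count; begin += items_per_batch)
    batches.emplace_back(begin, std::min(begin + items_per_batch, vertex_count));

  std::atomic<size_t> next{0};
  std::vector<std::thread> workers;
  for (unsigned t = 0; t < thread_count; ++t)
    workers.emplace_back([&] {
      // Each worker pulls the next unclaimed batch until none remain.
      for (size_t i = next.fetch_add(1); i < batches.size(); i = next.fetch_add(1))
        LoadVertexBatch(batches[i].first, batches[i].second);
    });
  for (auto &w : workers) w.join();
}
```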
Ante Javor
64fd281b2e
Update benchgraph methodology (#899) 2023-04-25 09:45:25 +02:00
János Benjamin Antal
97e250129e
Change AccumulateCursor to use utils::pmr::deque (#888)
* Increase performance by eliminating unnecessary `TypedValue` copies
2023-04-24 16:22:22 +02:00
Marko Budiselić
b02b201129
Improve Jepsen setup (#893) 2023-04-23 16:16:49 +02:00
Antonio Filipovic
2c6a55775d
Fix max block size bug on LOAD CSV (#877) 2023-04-19 16:10:20 +02:00
Ante Javor
940bf6722c
Add mgbench tutorial (#836)
* Add Docker runner
* Add Docker client
* Add benchgraph.sh script
* Add package script
2023-04-19 08:21:55 +02:00
Bruno Sačarić
49b5343238
Update license year (#867) 2023-04-05 11:06:32 +02:00
Andi
26a0866938
Fix index optimization bug (#860) 2023-04-04 23:43:13 +02:00
Bruno Sačarić
6545283dac
Add upload to S3 job to Package All workflow (#845)
* add to package_all

* add separate workflow

* make reduced jobs version for testing

* typo

* exclude amzn-2 in init script, because isort 5.12 fails to install

* change dir name

* move env var for release version

* bugfix

* Revert "make reduced jobs version for testing"

This reverts commit 7bb75f34a4.

* remove release folder

* extend timeout for arm builds

* increase timeout limit to dangerous levels

* revert timeouts, fix upload naming

* remove untested workflow
2023-04-04 21:54:25 +02:00
Bruno Sačarić
69c735934c
Update docker credentials (#853) 2023-04-04 21:47:04 +02:00
Antonio Filipovic
64e837b355
Introduce analytics mode (#772) 2023-04-04 18:46:26 +02:00
Antonio Filipovic
a586f2f98d
Change EvalContext and QueryExecution to use PoolResource on LOAD CSV (#825)
* Change PullPlan to use specific PoolResource for LOAD CSV
2023-04-04 16:54:08 +02:00
Josipmrden
9fc51f74a0
Skip label based auth on user with global visibility on graph (#837) 2023-04-04 11:13:25 +02:00
Josipmrden
128771a6ec
Add SHA-256 password encryption (#839) 2023-04-03 16:29:21 +02:00
Josipmrden
f5a49ed29f Add Cypher subqueries (#794) (#851)
Co-authored-by: Bruno Sačarić <bruno.sacaric@gmail.com>
2023-03-31 13:49:10 +00:00
Josipmrden
398503da7a
Add index statistics for better query planning (#812) 2023-03-30 15:34:34 +02:00
Bruno Sačarić
0819b40202
Fix bug on AllShortest with multiple edges between nodes (#832) 2023-03-29 16:39:41 +02:00
Andi
029be10f1d
Add queries to show or terminate active transactions (#790) 2023-03-27 15:46:00 +02:00
Aidar Samerkhanov
a9dc344b49
Add automatic CPU architecture detection to CMake (#838) 2023-03-27 11:26:10 +02:00
Marko Budiselić
8b0dca9eab
Upgrade pre-commit hook to use isort 5.12 (#840) 2023-03-26 17:34:51 +02:00
Ante Javor
cb813c3070
Add bigger LDBC dataset to mgbench (#747) 2023-03-21 21:44:11 +01:00
Ante Javor
6349fc9501
Add time-dependent execution to the mgbench client (#805) 2023-03-18 20:18:58 +01:00
Jure Bajic
c4167bafdd
Add support for Amazon Linux 2 and stop generating C++ using Lisp/LCP (#814) 2023-03-14 19:24:55 +01:00
Bruno Sačarić
6f51141148
Update license year (#821) 2023-03-08 10:46:21 +01:00
Ante Pušić
97d45ab1d8
Add Python query module API mock (#757) 2023-03-07 15:41:19 +01:00
Josipmrden
6abd356d01
[master < E214] WHERE Exists feature (#818)
Add WHERE exists() to filter based on neighbouring pattern expressions
2023-03-07 00:28:41 +01:00
Vlasta
99a6c72bba
Change message on incompatible epoch_id error (#786) 2023-03-06 20:01:02 +01:00
Ante Pušić
173f5430aa
Remove noexcept from functions that may throw (#819) 2023-03-06 17:34:34 +01:00
Jure Bajic
362dc95e27
Add support for Ubuntu 22.04 ARM (#810) 2023-03-01 18:44:56 +01:00
Kruno Golubic
eead24a562
Add Lima badge to README (#802) 2023-02-24 17:08:07 +01:00
Antonio Filipovic
d79dd69607
Improve performance with props init on node|edge creation (#788) 2023-02-24 15:40:35 +01:00
Katarina Supe
024bf0c578
Merge pull request #806 from memgraph/vpavicic-patch-1
Update README.md
2023-02-23 15:06:40 +01:00
Vlasta
d7443a5558
Update README.md 2023-02-23 15:05:03 +01:00
Kruno Golubic
2df357b012
Add RedHat badge to README (#796) 2023-02-20 18:18:29 +01:00
Kruno Golubic
b2b5a6e2a0
Add Fedora badge to README (#795) 2023-02-20 17:59:57 +01:00
Ante Javor
5e2ee6c817
Improve mgbench C++ client (#760) 2023-02-17 17:54:05 +01:00
Antonio Filipovic
862a1afdf1
Improve Visit performance (#774) 2023-02-17 13:09:25 +01:00
Antonio Filipovic
bbce21e78f
Update pull request template (#775) 2023-02-17 11:50:17 +01:00
Jure Bajic
15c8662023
Add support for Fedora 36 (#787) 2023-02-17 10:47:36 +01:00
Katarina Supe
beaba0fc16
Update README with Cloud, Lab and import info (#768) 2023-02-07 15:43:41 +01:00
Josipmrden
8f70c5f2a5
Fix label-based auth using OLD view instead of NEW when merging nodes (#755) 2023-02-01 13:20:26 +01:00
Bruno Sačarić
14c651d3ba
Edit workflows (#756) 2023-01-31 23:20:49 +01:00
Andi
04efc7a4a6
Remove torch and igraph from sys cache (#720) 2023-01-31 13:11:59 +01:00
Bruno Sačarić
19832b5838
Merge pull request #749 from memgraph/update-license-year-2.5.2
[master < MG] Update license year
2023-01-26 13:08:47 +01:00
Bruno Sačarić
e63fae2d5b
Update license year 2023-01-26 13:02:33 +01:00
Bruno Sačarić
34dd47ef07
Fix nested FOREACH shadowing bug (#725) 2023-01-25 20:06:05 +01:00
Jure Bajic
d3f275b231
Exclude license dir from CI trigger (#740) 2023-01-25 18:49:02 +01:00
Ante Pušić
aad4bcb7a0
Fix C++ API memory leak on Relationships() (#743) 2023-01-25 17:23:46 +01:00
Bruno Sačarić
034b54cb72
Fix bug on all shortest paths with an upper bound (#737) 2023-01-25 15:32:00 +01:00
Josipmrden
8cf51d9f68
Fix bug in query plan to use indexes on optional match and foreach (#736)
* Add fix in query plan to use indexes on optional match and foreach
2023-01-25 12:53:33 +01:00
Antonio Filipovic
1cd1da84fd
Fix bug on (vertex|edge) properties in C++ API (#732) 2023-01-23 12:57:17 +01:00
Jure Bajic
128a6cd522
Update license year (#739) 2023-01-19 12:31:59 +01:00
niko4299
d9eeedb9ee
Adding qid in bolt (#721) 2023-01-18 16:33:03 +01:00
Andi
156e2cd095
Fix invalid edge reference in on-delete triggers (#717)
* Added check if there is invalid reference to the underlying edge

* Added fix and e2e tests

* Isolation levels tracking based on from_vertex_

* Added explicit transaction test + edge accessor changes based on the vertex_edge

* Autocommit on tests, initialize deleted by checking out_edges

Co-authored-by: Marko Budiselić <marko.budiselic@memgraph.com>
2023-01-18 15:05:10 +01:00
Ante Javor
8b834c702c
Update mgbench to run Diff workflow under 30mins (#730) 2023-01-14 16:11:49 +01:00
Katarina Supe
eda5213d95
Release pypi mgp 1.1.1 (#727) 2022-12-24 09:33:53 +01:00
Bruno Sačarić
1f2a15e7c8
Fix MATCH not allowed on replica (#709) 2022-12-23 14:47:12 +01:00
Ante Javor
d72e7fa38d
Fix mgp.py create edge type hint and comment (#724) 2022-12-23 10:08:52 +01:00
Antonio Filipovic
e5e37bc14a
Fix LOAD CSV large memory usage (#712) 2022-12-22 19:38:48 +01:00
Katarina Supe
3ee068bbf9
Update build badge (#722) 2022-12-20 21:56:05 +01:00
Jure Bajic
68e846b182
Update license year (#711) 2022-12-13 13:54:09 +01:00
Andi
310e305cfb
Fix python module reloading (#706) 2022-12-12 21:11:13 +01:00
Marko Budiselić
9d6a23b6bd
Add init-file and init-data-file capabilities (#696) 2022-12-09 18:50:33 +01:00
Andi
f2d5ab61c4
Fix Python submodules reloading (#653) 2022-12-09 14:30:41 +01:00
Andi
0f77c85824
Fix cursor exhaustion by adding EmptyResult operator (#667) 2022-12-09 11:44:07 +01:00
niko4299
d6d4153fb7
Fix graph projection bug (#697) 2022-12-08 13:45:20 +01:00
Tyler Neely
7d6a5e5b9c
Add support for -h to show help in addition to --help (#682) 2022-12-07 16:51:32 +01:00
Vlasta
c529d52664
Add community links to the README (#693) 2022-12-07 12:42:48 +01:00
Ante Pušić
45451bae3b
Fix C++ query modules API bugs (#688) 2022-12-06 16:57:50 +01:00
niko4299
3e11f38548
Add aggregation distinct (#654) (#665) 2022-12-03 13:48:44 +02:00
Jure Bajic
6e4047a847
Bump mgconsole version (#660) 2022-12-01 13:10:08 +01:00
Ante Javor
8febdc12fb
Update tests/mgbench README (#679) 2022-11-30 12:43:57 +01:00
Kruno Golubic
3f23a10f44
Update README with one new paragraph and link (#675) 2022-11-29 14:24:02 +01:00
Ante Javor
11300960de
Add mixed workload and Neo4j client to mgbench (#566)
* Fix bolt bug inside the C++ client
* Add tail latency stats
* Add hot run option
* Add query caching
* Add jcmd memory tracking
2022-11-28 08:47:22 +01:00
Jure Bajic
1d5f387ddd
Add python check (#643) 2022-11-09 11:48:34 +02:00
Marko Budiselić
c4c3a254bf
Remove tools/check-build-system (#642) 2022-11-07 18:54:22 +01:00
Jure Bajic
b4beb0fc86
Update license for release 2.4.2 (#641) 2022-11-07 13:07:20 +01:00
Bruno Sačarić
58e6097664
Fix ALLSHORTEST combined with id function (#636) 2022-11-04 19:36:03 +01:00
Jure Bajic
ff21c0705c
Add multiple license support (#618)
Make license info available through LicenseChecker
Add LicenseInfoSender
Move license library from utils
Rename telemetry_lib to mg-telemetry
2022-11-04 15:23:43 +01:00
Katarina Supe
2a2b99b02a
Release mgp 1.1.0 (#620)
- Updated its README, authors, and published it with the added
   AuthorizationError that was part of the Memgraph 2.4.0 release.
 - Added missing SOURCE_TYPE_KAFKA and SOURCE_TYPE_PULSAR variables in _mgp.
2022-11-02 14:14:12 +01:00
Antonio Filipovic
3daab6ce97
Fix bug in the C API in-edges iterator (#613) 2022-10-25 19:18:44 +02:00
Marko Budiselić
fbd7274c95
Add Fedora 36 as an OS (#599) 2022-10-21 15:22:50 +02:00
Marko Budiselić
6efc84f022
Add libc++ option to the toolchain (#567) 2022-10-20 07:52:59 +02:00
Jure Bajic
287c2e94d1
Update license date (#586) 2022-10-07 14:42:52 +02:00
Antonio Filipovic
417cf4b30b
Fix bug related to EdgeType and Label getters in query modules (#582)
Co-authored-by: Kostas Kyrimis <kostaskyrim@gmail.com>
2022-10-06 21:21:11 +02:00
Jure Bajic
68e7fd3d36
Fix architecture check (#583) 2022-10-06 15:55:23 +02:00
Bruno Sačarić
5261d82063
Fix passing user's fine_grained_access_handler instead of role's (#579)
Co-authored-by: Jure Bajic <jure.bajic@memgraph.com>
2022-09-30 18:27:47 +02:00
Jure Bajic
9eb87bcf3e
Update release process (#564)
* Fix centos 7 virtualenv issue
* Fix ubuntu release issue
* Fix architecture check
2022-09-20 18:42:15 +02:00
Jure Bajic
a9491d3e68
Update license date (#561) 2022-09-20 14:23:24 +02:00
Marko Budiselić
b42e47b0be
Reduce the size of TypedValue (#560)
- Reduce the size of TypedValue
- Fix double allocation
- Add `Graph` to `TypedValue` unit tests
- Fix allocator usage in `TypedValue`
- Add graph projection to `long_running.cpp` stress test
2022-09-20 14:21:34 +02:00
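To make the size reduction in #560 concrete, here is an illustrative tagged union, not Memgraph's real `TypedValue`: keeping a one-byte discriminant next to an 8-byte payload union bounds the footprint at 16 bytes on typical 64-bit ABIs.

```cpp
#include <cstdint>

// Illustrative tagged union, not Memgraph's real TypedValue.
struct SmallValue {
  enum class Kind : uint8_t { Null, Bool, Int, Double } kind{Kind::Null};
  union {
    bool bool_v;
    int64_t int_v;
    double double_v;
  };
};
// One byte of discriminant plus an 8-byte payload, padded to 16 bytes
// on typical 64-bit ABIs.
static_assert(sizeof(SmallValue) == 16);
```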
Marko Budiselić
898c894a48
Merge pull request #484 from memgraph/E129-MG-label-based-authorization
* Add label-based authorization

Co-authored-by: Boris Taševski <boris.tasevski@memgraph.com>
Co-authored-by: Josip Mrden <josip.mrden@memgraph.com>
Co-authored-by: Niko Krvavica <niko.krvavica@memgraph.com>
Co-authored-by: Bruno Sacaric <bruno.sacaric@memgraph.com>
2022-09-16 15:12:54 +02:00
niko4299
a3c2492672
Add fine grained access control to mgbench (#522) 2022-09-15 21:33:15 +02:00
Boris Taševski
a0b8871b36
Fix clang-tidy errors and other warnings (#555) 2022-09-15 15:51:35 +02:00
Marko Budiselic
bb6cf35441 Merge master cpp module API 2022-09-15 11:29:52 +02:00
Ante Pušić
5bc301d21d
Add C++ query modules API (#546)
Co-authored-by: Ante Pusic <ante.pusic@memgraph.com>
Co-authored-by: Josip Mrden <josip.mrden@memgraph.com>
2022-09-15 11:26:26 +02:00
Marko Budiselic
1b89e679df Merge master 2022-09-15 08:32:41 +02:00
Boris Taševski
43e0520bc8
Merge master (#554) 2022-09-15 07:25:36 +02:00
Bruno Sačarić
2c8e45e889
Add run_id to the query summary (#548) 2022-09-14 20:21:06 +02:00
Boris Taševski
0876a8848d
Merge master to epic and fix differences (#552) 2022-09-14 18:36:21 +02:00
Boris Taševski
fb4641a6be
Fix logic in fine grained permissions (#551) 2022-09-14 12:39:23 +02:00
niko4299
201f75e809
Add MG_ENTERPRISE and license checks (#547) 2022-09-14 01:10:28 +02:00
niko4299
dc8dad9794
Add authorization in SetLabels, RemoveLabels, Allshortestpath cursor (#537) 2022-09-13 17:14:23 +02:00
Boris Taševski
aa02745915
[E129-MG < T1030-MG] Tech debts (#540)
* renamed parameters (#539)

* added variable declarations in ifs; minor code improvements; (#541)

* dba parameter removed (#543)

* Accept -> Has rename; HasGlobalPermissionOnVertices/Edges -> HasGlobalPrivilegeOnVertices/Edges (#545)

* replaced passing dba from reference to pointer
2022-09-13 11:37:17 +02:00
Josipmrden
b2d5a8eeca
[E129-MG < T1040-MG] Add exceptions in LBA cursors (#536)
Added exceptions in update, create, and delete operators instead of logging
2022-09-12 14:04:40 +02:00
Boris Taševski
c09b175c76
[E129-MG < T1006-MG] Expand C API with LBA checks (#527)
* [T1006-MG < T1017-MG] Add LBA checks to all read procedures in C API (#515)

* Initial Impl

* NextPermittedEdge introduced

* revert moving constructor to cpp

* edge from and edge to methods expanded with lba check

* minor fix

* added check to path expand procedure

* Added integration tests for read query procedures

* additional check

* changed iterator type to reference

* comments from pr

Co-authored-by: Josip Mrden <josip.mrden@memgraph.io>

* [T1006-MG < T1018-MG] Add LBA checks to all update procedures in C API (#516)

* Initial Impl

* NextPermittedEdge introduced

* revert moving constructor to cpp

* edge from and edge to methods expanded with lba check

* minor fix

* extended update methods

* added check to path expand procedure

* Added integration tests for read query procedures

* Added integration tests for update query modules

* additional check

* changed iterator type to reference

* fixed bug in Update property for node; fixed 2 e2e tests

* replaced enum

Co-authored-by: Josip Mrden <josip.mrden@memgraph.io>

* [T1006-MG < T1019-MG] Add LBA checks to all Create and Delete procedures in C API (#517)

* Initial Impl

* NextPermittedEdge introduced

* revert moving constructor to cpp

* edge from and edge to methods expanded with lba check

* minor fix

* extended update methods

* initial implementation

* added check to path expand procedure

* Added integration tests for read query procedures

* Added integration tests for update query modules

* Added unit tests for creation of vertex, adding and removing vertex label

* additional check

* changed iterator type to reference

* Added unit tests for create edge

* Corrected query module in create edge

* fixed bug in Update property for node; fixed 2 e2e tests

* fixed merge errors

* Expanded FineGrainedAuthChecker with HasGlobalPermissionOnVertices and HasGlobalPermissionOnEdges

* Removed two wrong checks; Added two global checks

* return null added

* introduced new mgp_error value

* fixed endless loop

* replaced enum

* intermediate

* tests updated

* PermissionDeniedError -> AuthorizationError rename

* rename in enum permission_denied error -> authorization error

* mgp_vertex_remove_label check improved

* quotes changed; order of imports fixed

* string constant introduced

* import fixed

* yaml format

Co-authored-by: Josip Mrden <josip.mrden@memgraph.io>

Co-authored-by: Josip Mrden <josip.mrden@memgraph.io>
2022-09-08 17:48:34 +02:00
Kostas Kyrimis
f1fe77adfb
Graph project feature implementation (#508) (#535) 2022-09-07 16:00:49 +03:00
Josip Mrden
35f8978560 Merge branch 'master' into E129-MG-label-based-authorization 2022-09-07 09:28:32 +02:00
Josip Matak
9e8fb2516b
Add all shortest path algorithm (#409) 2022-09-06 16:21:32 +02:00
Josip Mrden
0a66feccff Merge branch 'master' into E129-MG-label-based-authorization 2022-09-06 11:14:27 +02:00
Boris Taševski
d008a2ad8d
[E129-MG < T1007-MG] Expand Cursors with LBA checks (#524)
* [T1007-MG < T0997-MG] Authorization on paths (#501)

* Added read authorization in paths operators

* [T1007-MG < T1016-MG] Added authorization in create and delete operators (#513)

* Added authorization in RemoveNodeCursor, RemoveExpandCursor, CreateNodeCursor, CreateExpandCursor, MergeCursor

* [T1007-MG < T1014-MG] Add authorization to read operators (#520)

Added label based access control to read operators (ScanAll).

* [T1007-MG < T1015-MG] Add authorization to update operators (SetProperty, SetProperties, RemoveProperty) (#521)

Added label based authorization to update operators

Co-authored-by: niko4299 <51059248+niko4299@users.noreply.github.com>
Co-authored-by: Josip Mrden <josip.mrden@memgraph.io>
2022-09-02 17:12:07 +02:00
Josipmrden
7478300762
[E129-MG < T997-MG] Show label privileges (#506)
Added showing of label privileges functionality to fine grained access control.
2022-08-31 12:14:16 +02:00
János Benjamin Antal
0bc298c3ad
Fix handling of the ROUTE Bolt message (#475)
The fields of the ROUTE message were not read from the input buffer, so the
input buffer got corrupted. Sending a new message to the server would then
result in reading the leftover fields from the buffer, which means reading
stale values instead of the next message's signature. Because of this unmet
expectation, Memgraph closed the connection. With this fix, the fields of the
ROUTE message are properly read and ignored (see the sketch below). 2022-08-26 13:19:27 +02:00
Boris Taševski
05f120b7d4
[E129-MG < T1004-MG] Expand cypher with more granular label permissions (#500)
* Added enum for more granular access control; Expanded functionality of fine grained access checker; Propagated changes to Edit, Deny and Revoke permissions methods in interpreter

* Introduced Merge method for merging two collections of permissions

* e2e tests implementation started

* Expanded cypher to support fine grained permissions

* ast.lcp::AuthQuery removed labels, added support for label permissions

* promoted label permissions to vector

* removed unnecessary enum value

* expanded glue/auth with LabelPrivilegeToLabelPermission

* added const

* extended Grant Deny and Revoke Privileges with new label privileges

* extended Edit Grant Deny and Revoke Privileges to properly use new model

* Fixed unit tests

* FineGrainedAccessChecker Grant and Deny methods reworked

* Revoke cypher slightly reworked; Revoke for labels works without label permissions

* EditPermission's label_permission lambda now takes two parameters

* constants naming enforced; replaced asterisks with string constant

* removed faulty test addition

* Naming fixes; FineGrainedAccessChecker unit tests introduced

* unnecessary includes removed; minor code improvements

* minor fix

* Access checker reworked; denies and grants merged into a single permission object; Created global_permission that applies to all non-created permissions. Grant, Deny and Revoke reworked; Merge method reworked

* Fixed wrong check;

* Fix after merge; renamed constants; removed unused constant

* Fix after merge; workloads.yaml for lbaprocedures e2e tests updated with new grammar

* Fixes after merge

* Fixes after merge

* fixed Revoke that was not fixed after the merge

* updated cypher main visitor tests

* PR review changes; Naming and const fixed, replaced double ternary with a lambda

* unwrapping the iterator fix

* merge 1003 minor fix

* minor spelling fixes

* Introduced visitPrivilegesList because of the doubled code

* const added

* string const to enum

* redundant braces

* added const

* minor code improvement

* e2e tests expanded

* if -> switch

* enum class inherits uint8_t now

* LabelPrivilege::EDIT -> LabelPrivilege::UPDATE

* LabelPermission -> EntityPermission; LabelPrivilege -> EntityPrivilege

* EntityPrivilege -> FineGrainedPrivilege; EntityPermission -> FineGrainedPermission
2022-08-22 14:11:43 +02:00
antoniofilipovic
d73d153978
Add logging API (#417) 2022-08-22 14:47:52 +03:00
Boris Taševski
b489ac7cff
[E129-MG < T1003-MG] Expand fine grained access checker with more granular permissions (#496)
* Added enum for more granular access control; Expanded functionality of fine grained access checker; Propagated changes to Edit, Deny and Revoke permissions methods in interpreter

* Introduced Merge method for merging two collections of permissions

* e2e tests implementation started

* FineGrainedAccessChecker Grant and Deny methods reworked

* removed faulty test addition

* Naming fixes; FineGrainedAccessChecker unit tests introduced

* unnecessary includes removed; minor code improvements

* Access checker reworked; denies and grants merged into a single permission object; Created global_permission that applies to all non-created permissions. Grant, Deny and Revoke reworked; Merge method reworked

* Fixed wrong check;

* PR review changes; Naming and const fixed, replaced double ternary with a lambda

* unwrapping the iterator fix

* minor spelling fixes
2022-08-18 16:59:38 +02:00
niko4299
e15576f56c
[E129-MG < T0982-MG] Implement edge type filtering (#489)
* GRANT, REVOKE, DENY and access_checker DONE

* Added AccessChecker to ExecutionContext

* grammar expanded; (#462)

* current

* T0954 mg expand user and role to hold permissions on labels (#465)

* added FineGrainedAccessPermissions class to model

* expanded user and role with fine grained access permissions

* fixed grammar

* [E129 < T0953-MG] GRANT, DENY, REVOKE added in interpreter and mainVisitor (#464)

* GRANT, DENY, REVOKE added in interpreter and mainVisitor

* Commented labelPermissions

* remove labelsPermission adding

* Fixed

* Removed extra lambda

* fixed

* [E129<-T0955-MG] Expand ExecutionContext with label related information (#467)

* added

* Added FineGrainedAccessChecker to Context

* fixed

* Added filtering

* testing

* Added edge filtering to storage, need to add filtering in simple Expand in operator.cpp

* Removed storage changes

* MATCH filtering working

* EdgeTypeFiltering working, just need to test everything again

* Removed FineGrainedAccessChecker

* Removed Expand Path

* Fix

* Tested FineGrainedAccessHandler, need to test AuthChecker

* Added integration test for lba

* Fixed merge conflicts

* PR fix

* fixed

* PR fix

* Fix test

* removed .vscode, .cache, .githooks

* githooks

* added tests

* fixed build

* Changed ast.lcp and User pointer to value in context.hpp

* Fixed test

* Remove denies on grant all

* AuthChecker

* Pr fix, auth_checker still not fixed

* Create mg-glue and extract UserBasedAuthChecker from AuthChecker

* Build fixed, need to fix test

* e2e tests

* e2e test working

* Added unit test, e2e and FineGrainedChecker

* Mege E129, auth_checker tests

* Fixed test

* e2e fix

Co-authored-by: Boris Taševski <36607228+BorisTasevski@users.noreply.github.com>
Co-authored-by: josipmrden <josip.mrden@external-basf.com>
Co-authored-by: János Benjamin Antal <benjamin.antal@memgraph.io>
2022-08-16 15:57:23 +02:00
Boris Taševski
a98463b0bd
[E129 < T0996] C-API: Implement using Fine Grained Access Checker in iterator over vertices (#494)
* implemented skipping vertices in Constructor and mgp_vertices_iterator_next

* Added utility function for moving iterator to next permitted vertex

* removed ifdef directive

* NextPermitted parameter type changed from mgp_vertices_iterator* to mgp_vertices_iterator&

* created support for lba-procedures e2e testing; Added test for vertex iterator skipping unauthorized vertices

* removed fixture from tests; converted generator to regular function;
2022-08-12 19:34:47 +02:00
Kruno Golubic
705631a35d
Create README file for CSV Import Tools (#493)
Co-authored-by: Marko Budiselić <marko.budiselic@memgraph.com>
2022-08-11 16:10:36 +02:00
Jeremy B
d4f0bb0e38
Correct inconsistencies w.r.t. sync replication (#435)
Add a report for the case where a sync replica does not confirm within a timeout (see the sketch below):
- Add a new exception, ReplicationException, returned when a sync replica does not confirm the reception of messages (new data, a new constraint/index, or triggers)
- Update the logic to throw ReplicationException when needed for insertion of new data, triggers, or creation of a new constraint/index
- Add end-to-end tests to cover the loss of connection with sync/async replicas when adding new data, new constraints/indexes, and triggers

Add end-to-end tests to cover the creation and drop of indexes, existence constraints, and uniqueness constraints

Improve the tooling function mg_sleep_and_assert to also show the last result when the duration is exceeded
2022-08-09 11:29:55 +02:00
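A minimal sketch of the reported behavior, assuming a future-based wait (Memgraph's internals differ): if a SYNC replica does not confirm within the timeout, a `ReplicationException` is surfaced to the committing client instead of blocking forever.

```cpp
#include <chrono>
#include <future>
#include <stdexcept>

// Assumed names and a future-based wait; Memgraph's internals differ.
struct ReplicationException : std::runtime_error {
  using std::runtime_error::runtime_error;
};

void AwaitSyncConfirmation(std::future<bool> &confirmation,
                           std::chrono::milliseconds timeout) {
  // If the sync replica does not confirm in time (or reports failure),
  // the committing client gets an exception instead of silence.
  if (confirmation.wait_for(timeout) != std::future_status::ready ||
      !confirmation.get()) {
    throw ReplicationException("sync replica did not confirm the transaction");
  }
}
```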
Jure Bajic
531db2d47c
Fix WebSocket test (#485)
* Fix websocket unit tests hanging
* Rename websocket to monitoring unit test
2022-08-08 14:49:48 +02:00
Boris Taševski
116262d9a0
[E129 < T0956] Filtering nodes in ScanAll cursor [Niko] (#492)
* implemented scanall filtering

* minor code refactor

* FindNextNode -> FindNextVertex
2022-08-04 19:20:17 +02:00
gvolfing
bbfef45b37
Add command to return startup config (#459)
Add a new command that returns the set of configuration flags the given
instance of Memgraph was started with. The returned information currently
consists of the name, the default value, and the current value of each flag
(a minimal sketch follows). The hidden property of three flags was removed,
namely --query-cost-planner, --query-vertex-count-to-expand-existing and
--query-max-plans. The flag --log-link-basename was completely removed since
it is not used.
2022-08-03 18:08:44 +02:00
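A minimal sketch of the data this command needs to surface per the description above: each flag's name, default value, and current value. The structures are hypothetical, not Memgraph's actual flag registry.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical structures, not Memgraph's flag registry: the command's
// output needs exactly a name, a default value, and a current value per flag.
struct FlagInfo {
  std::string name;
  std::string default_value;
  std::string current_value;
};

int main() {
  std::vector<FlagInfo> flags{
      {"--query-cost-planner", "true", "true"},
      {"--query-max-plans", "1000", "1000"},
  };
  for (const auto &f : flags) {
    std::cout << f.name << "  default=" << f.default_value
              << "  current=" << f.current_value << '\n';
  }
}
```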
János Benjamin Antal
05b00edfd4
Declare mgp_func_context outside the callback function (#481) 2022-08-03 15:15:53 +02:00
Boris Taševski
480df4ed69
Merge old Label Based Auth Epic branch into new one because of commits with bad checks on the old epic branch (#478)
* grammar expanded; (#462)

* T0954 mg expand user and role to hold permissions on labels (#465)

* added FineGrainedAccessPermissions class to model

* expanded user and role with fine grained access permissions

* fixed grammar

* [E129 < T0953-MG] GRANT, DENY, REVOKE added in interpreter and mainVisitor (#464)

* GRANT, DENY, REVOKE added in interpreter and mainVisitor

* Commented labelPermissions

* remove labelsPermission adding

* Removed extra lambda

* [E129<-T0955-MG] Expand ExecutionContext with label related information (#467)

* Added FineGrainedAccessChecker to Context

* fixed failing tests for label based authorization (#480)

* Marked FineGrainedAccessChecker ctor explicit; Introduced change to clang-tidy; (#483)

Co-authored-by: niko4299 <51059248+niko4299@users.noreply.github.com>
2022-08-02 12:51:22 +02:00
1459 changed files with 177761 additions and 29752 deletions

View File

@@ -1,6 +1,7 @@
---
BasedOnStyle: Google
---
Language: Cpp
BasedOnStyle: Google
Standard: "c++20"
UseTab: Never
DerivePointerAlignment: false

View File

@@ -6,6 +6,7 @@ Checks: '*,
-altera-unroll-loops,
-android-*,
-cert-err58-cpp,
-cppcoreguidelines-avoid-do-while,
-cppcoreguidelines-avoid-c-arrays,
-cppcoreguidelines-avoid-goto,
-cppcoreguidelines-avoid-magic-numbers,
@@ -60,10 +61,11 @@ Checks: '*,
-readability-implicit-bool-conversion,
-readability-magic-numbers,
-readability-named-parameter,
-readability-identifier-length,
-misc-no-recursion,
-concurrency-mt-unsafe,
-bugprone-easily-swappable-parameters'
-bugprone-easily-swappable-parameters,
-bugprone-unchecked-optional-access'
WarningsAsErrors: ''
HeaderFilterRegex: 'src/.*'
AnalyzeTemporaryDtors: false

View File

@@ -33,4 +33,4 @@ for file in $modified_files; do
fi
done;
return ${FAIL}
exit ${FAIL}

View File

@@ -1,19 +1,17 @@
---
name: Bug report
about: Create a report to help us improve
title: "[BUG] "
title: ""
labels: bug
assignees: gitbuda, antonio2368
---
**Memgraph version**
Which version did you use?
**Environment**
Some information about the environment you are using Memgraph on: operating
system, how do you connect, with or without docker, which driver etc.
system, architecture (ARM, x86), how do you connect, with or without docker,
which driver etc.
**Describe the bug**
A clear and concise description of what the bug is.
@@ -22,6 +20,7 @@ A clear and concise description of what the bug is.
Steps to reproduce the behavior:
1. Run the following query '...'
2. Click on '....'
3. ... IDEALLY: link to the workload info (DATASET & QUERIES) ...
**Expected behavior**
A clear and concise description of what you expected to happen.
@@ -32,3 +31,11 @@ your problem.
**Additional context**
Add any other context about the problem here.
**Verification Environment**
Once we fix it, what do you need to verify the fix?
Do you need:
* Plain memgraph package -> for which Linux?
* Plain memgraph Docker image?
* Which architecture do you use ARM | x86?
* Full Memgraph platform?

View File

@@ -1,11 +1,28 @@
### Description
Please briefly explain the changes you made here.
Please delete either the [master < EPIC] or [master < Task] part, depending on your needs.
[master < Epic] PR
- [ ] Check, and update documentation if necessary
- [ ] Update [changelog](https://docs.memgraph.com/memgraph/changelog)
- [ ] Write E2E tests
- [ ] Compare the [benchmarking results](https://bench-graph.memgraph.com/) between the master branch and the Epic branch
- [ ] Provide the full content or a guide for the final git message
- [FINAL GIT MESSAGE]
[master < Task] PR
- [ ] Check, and update documentation if necessary
- [ ] Update [changelog](https://docs.memgraph.com/memgraph/changelog)
- [ ] Provide the full content or a guide for the final git message
- **[FINAL GIT MESSAGE]**
### Documentation checklist
- [ ] Add the documentation label tag
- [ ] Add the bug / feature label tag
- [ ] Add the milestone for which this feature is intended
- If not known, set for a later milestone
- [ ] Write a release note, including added/changed clauses
- **[Release note text]**
- [ ] Link the documentation PR here
- **[Documentation PR link]**
- [ ] Tag someone from docs team in the comments

View File

@@ -3,7 +3,7 @@ name: Daily Benchmark
on:
workflow_dispatch:
schedule:
- cron: "0 1 * * *"
- cron: "0 22 * * *"
jobs:
release_benchmarks:
@@ -16,7 +16,7 @@ jobs:
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@@ -59,7 +59,7 @@ jobs:
source ve3/bin/activate
pip install -r requirements.txt
./main.py --benchmark-name "macro_benchmark" \
--benchmark-results-path "../../tests/macro_benchmark/.harness_summary" \
--benchmark-results "../../tests/macro_benchmark/.harness_summary" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
@@ -67,7 +67,13 @@ jobs:
- name: Run mgbench
run: |
cd tests/mgbench
./benchmark.py --num-workers-for-benchmark 12 --export-results benchmark_result.json pokec/medium/*/*
./benchmark.py vendor-native --num-workers-for-benchmark 12 --export-results benchmark_pokec.json pokec/medium/*/*
./benchmark.py vendor-native --num-workers-for-benchmark 1 --export-results benchmark_supernode.json supernode
./benchmark.py vendor-native --num-workers-for-benchmark 1 --export-results benchmark_high_write_set_property.json high_write_set_property
./benchmark.py vendor-native --num-workers-for-benchmark 12 --export-results cartesian.json cartesian
- name: Upload mgbench results
run: |
@@ -76,7 +82,25 @@ jobs:
source ve3/bin/activate
pip install -r requirements.txt
./main.py --benchmark-name "mgbench" \
--benchmark-results-path "../../tests/mgbench/benchmark_result.json" \
--benchmark-results "../../tests/mgbench/benchmark_pokec.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
./main.py --benchmark-name "supernode" \
--benchmark-results "../../tests/mgbench/benchmark_supernode.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
./main.py --benchmark-name "high_write_set_property" \
--benchmark-results "../../tests/mgbench/benchmark_high_write_set_property.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
./main.py --benchmark-name "cartesian" \
--benchmark-results "../../tests/mgbench/cartesian.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"

View File

@@ -14,106 +14,111 @@ on:
- "**/*.md"
- ".clang-format"
- "CODEOWNERS"
- "licenses/*"
jobs:
community_build:
name: "Community build"
runs-on: [self-hosted, Linux, X64, Diff]
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 60
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: RelWithDebInfo
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build community binaries
- name: Spin up mgbuild container
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
# Initialize dependencies.
./init
# Build community binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=release -DMG_ENTERPRISE=OFF ..
make -j$THREADS
- name: Build release binaries
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph --community
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph unit
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure -j$THREADS
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
code_analysis:
name: "Code analysis"
runs-on: [self-hosted, Linux, X64, Diff]
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 60
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: Debug
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Spin up mgbuild container
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
# This is also needed if we want to do comparisons against other branches
# See https://github.community/t/checkout-code-fails-when-it-runs-lerna-run-test-since-master/17920
- name: Fetch all history for all tags and branches
run: git fetch
- name: Build combined ASAN, UBSAN and coverage binaries
- name: Initialize deps
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
cd build
cmake -DTEST_COVERAGE=ON -DASAN=ON -DUBSAN=ON ..
make -j$THREADS memgraph__unit
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests. It is restricted to 2 threads intentionally, because higher concurrency makes the timing related tests unstable.
cd build
LSAN_OPTIONS=suppressions=$PWD/../tools/lsan.supp UBSAN_OPTIONS=halt_on_error=1 ctest -R memgraph__unit --output-on-failure -j2
- name: Compute code coverage
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Compute code coverage.
cd tools/github
./coverage_convert
# Package code coverage.
cd generated
tar -czf code_coverage.tar.gz coverage.json html report.json summary.rmu
- name: Save code coverage
uses: actions/upload-artifact@v2
with:
name: "Code coverage"
path: tools/github/generated/code_coverage.tar.gz
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph --init-only
- name: Set base branch
if: ${{ github.event_name == 'pull_request' }}
@@ -125,128 +130,232 @@ jobs:
run: |
echo "BASE_BRANCH=origin/master" >> $GITHUB_ENV
- name: Python code analysis
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph code-analysis --base-branch "${{ env.BASE_BRANCH }}"
- name: Build combined ASAN, UBSAN and coverage binaries
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph --coverage --asan --ubsan
- name: Run unit tests
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph unit-coverage
- name: Compute code coverage
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph code-coverage
- name: Save code coverage
uses: actions/upload-artifact@v4
with:
name: "Code coverage(Code analysis)"
path: tools/github/generated/code_coverage.tar.gz
- name: Run clang-tidy
run: |
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph clang-tidy --base-branch "${{ env.BASE_BRANCH }}"
# Restrict clang-tidy results only to the modified parts
git diff -U0 ${{ env.BASE_BRANCH }}... -- src | ./tools/github/clang-tidy/clang-tidy-diff.py -p 1 -j $THREADS -path build | tee ./build/clang_tidy_output.txt
# Fail if any warning is reported
! cat ./build/clang_tidy_output.txt | ./tools/github/clang-tidy/grep_error_lines.sh > /dev/null
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
debug_build:
name: "Debug build"
runs-on: [self-hosted, Linux, X64, Diff]
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 100
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: Debug
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build debug binaries
- name: Spin up mgbuild container
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build debug binaries.
cd build
cmake ..
make -j$THREADS
- name: Run leftover CTest tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run leftover CTest tests (all except unit and benchmark tests).
cd build
ctest -E "(memgraph__unit|memgraph__benchmark)" --output-on-failure
- name: Run drivers tests
run: |
./tests/drivers/run.sh
- name: Run integration tests
run: |
cd tests/integration
for name in *; do
if [ ! -d $name ]; then continue; fi
pushd $name >/dev/null
echo "Running: $name"
if [ -x prepare.sh ]; then
./prepare.sh
fi
if [ -x runner.py ]; then
./runner.py
elif [ -x runner.sh ]; then
./runner.sh
fi
echo
popd >/dev/null
done
- name: Run cppcheck and clang-format
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run cppcheck and clang-format.
cd tools/github
./cppcheck_and_clang_format diff
- name: Save cppcheck and clang-format errors
uses: actions/upload-artifact@v2
with:
name: "Code coverage"
path: tools/github/cppcheck_and_clang_format.txt
release_build:
name: "Release build"
runs-on: [self-hosted, Linux, X64, Diff]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
steps:
- name: Set up repository
uses: actions/checkout@v2
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph
# Initialize dependencies.
./init
- name: Run leftover CTest tests
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph leftover-CTest
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=release ..
make -j$THREADS
- name: Run drivers tests
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph drivers
- name: Run HA driver tests
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph drivers-high-availability
- name: Run integration tests
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph integration
- name: Run cppcheck and clang-format
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph cppcheck-and-clang-format
- name: Save cppcheck and clang-format errors
uses: actions/upload-artifact@v4
with:
name: "Code coverage(Debug build)"
path: tools/github/cppcheck_and_clang_format.txt
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
release_build:
name: "Release build"
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 100
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: Release
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Spin up mgbuild container
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
- name: Build release binaries
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph
- name: Run GQL Behave tests
run: |
cd tests/gql_behave
./continuous_integration
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph gql-behave
- name: Save quality assurance status
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: "GQL Behave Status"
path: |
@@ -255,145 +364,241 @@ jobs:
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--threads $THREADS \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph unit
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure -j$THREADS
# This step will be skipped because the e2e stream tests have been disabled
# We need to fix this as soon as possible
- name: Ensure Kafka and Pulsar are up
if: false
run: |
cd tests/e2e/streams/kafka
docker-compose up -d
cd ../pulsar
docker-compose up -d
- name: Run e2e tests
run: |
# TODO(gitbuda): Setup mgclient and pymgclient properly.
cd tests
./setup.sh
source ve3/bin/activate
cd e2e
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:../../libs/mgclient/lib python runner.py --workloads-root-directory .
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph e2e
# Same as two steps prior
- name: Ensure Kafka and Pulsar are down
if: false
run: |
cd tests/e2e/streams/kafka
docker-compose down
cd ../pulsar
docker-compose down
- name: Run stress test (plain)
run: |
cd tests/stress
./continuous_integration
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph stress-plain
- name: Run stress test (SSL)
run: |
cd tests/stress
./continuous_integration --use-ssl
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph stress-ssl
- name: Run durability test
run: |
cd tests/stress
source ve3/bin/activate
python3 durability --num-steps 5
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph durability
- name: Create enterprise DEB package
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
package-memgraph
cd build
# create mgconsole
# we use the -B to force the build
make -j$THREADS -B mgconsole
# Create enterprise DEB package.
mkdir output && cd output
cpack -G DEB --config ../CPackConfig.cmake
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
copy --package
- name: Save enterprise DEB package
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: "Enterprise DEB package"
path: build/output/memgraph*.deb
path: build/output/${{ env.OS }}/memgraph*.deb
- name: Copy build logs
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
copy --build-logs
- name: Save test data
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
if: always()
with:
name: "Test data"
path: |
# multiple paths could be defined
build/logs
name: "Test data(Release build)"
path: build/logs
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
release_jepsen_test:
name: "Release Jepsen Test"
runs-on: [self-hosted, Linux, X64, Debian10, JepsenControl]
#continue-on-error: true
runs-on: [self-hosted, Linux, X64, DockerMgBuild]
timeout-minutes: 80
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-12
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: RelWithDebInfo
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Spin up mgbuild container
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph
# Initialize dependencies.
./init
- name: Copy memgraph binary
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
copy --binary
# Build only the memgraph release binary.
cd build
cmake -DCMAKE_BUILD_TYPE=release ..
make -j$THREADS memgraph
- name: Refresh Jepsen Cluster
run: |
cd tests/jepsen
./run.sh cluster-refresh
- name: Run Jepsen tests
run: |
cd tests/jepsen
./run.sh test --binary ../../build/memgraph --run-args "test-all --node-configs resources/node-config.edn" --ignore-run-stdout-logs --ignore-run-stderr-logs
./run.sh test-all-individually --binary ../../build/memgraph --ignore-run-stdout-logs --ignore-run-stderr-logs
- name: Save Jepsen report
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
if: ${{ always() }}
with:
name: "Jepsen Report"
path: tests/jepsen/Jepsen.tar.gz
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove
release_benchmarks:
name: "Release benchmarks"
runs-on: [self-hosted, Linux, X64, Diff, Gen7]
runs-on: [self-hosted, Linux, X64, DockerMgBuild, Gen7]
timeout-minutes: 60
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
OS: debian-11
TOOLCHAIN: v5
ARCH: amd
BUILD_TYPE: Release
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Spin up mgbuild container
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
run
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build only memgraph release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=release ..
make -j$THREADS
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--build-type $BUILD_TYPE \
--threads $THREADS \
build-memgraph
- name: Run macro benchmarks
run: |
cd tests/macro_benchmark
./harness QuerySuite MemgraphRunner \
--groups aggregation 1000_create unwind_create dense_expand match \
--no-strict
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph macro-benchmark
- name: Get branch name (merge)
if: github.event_name != 'pull_request'
@@ -407,29 +612,49 @@ jobs:
- name: Upload macro benchmark results
run: |
cd tools/bench-graph-client
virtualenv -p python3 ve3
source ve3/bin/activate
pip install -r requirements.txt
./main.py --benchmark-name "macro_benchmark" \
--benchmark-results-path "../../tests/macro_benchmark/.harness_summary" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph upload-to-bench-graph \
--benchmark-name "macro_benchmark" \
--benchmark-results "../../tests/macro_benchmark/.harness_summary" \
--github-run-id ${{ github.run_id }} \
--github-run-number ${{ github.run_number }} \
--head-branch-name ${{ env.BRANCH_NAME }}
- name: Run mgbench
run: |
cd tests/mgbench
./benchmark.py --num-workers-for-benchmark 12 --export-results benchmark_result.json pokec/medium/*/*
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph mgbench
- name: Upload mgbench results
run: |
cd tools/bench-graph-client
virtualenv -p python3 ve3
source ve3/bin/activate
pip install -r requirements.txt
./main.py --benchmark-name "mgbench" \
--benchmark-results-path "../../tests/mgbench/benchmark_result.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
--enterprise-license $MEMGRAPH_ENTERPRISE_LICENSE \
--organization-name $MEMGRAPH_ORGANIZATION_NAME \
test-memgraph upload-to-bench-graph \
--benchmark-name "mgbench" \
--benchmark-results "../../tests/mgbench/benchmark_result.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
- name: Stop mgbuild container
if: always()
run: |
./release/package/mgbuild.sh \
--toolchain $TOOLCHAIN \
--os $OS \
--arch $ARCH \
stop --remove


@@ -14,7 +14,7 @@ jobs:
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)


@@ -1,178 +0,0 @@
name: Package All
# TODO(gitbuda): Cleanup docker container if GHA job was canceled.
on: workflow_dispatch
jobs:
centos-7:
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package centos-7
- name: "Upload package"
uses: actions/upload-artifact@v2
with:
name: centos-7
path: build/output/centos-7/memgraph*.rpm
centos-9:
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package centos-9
- name: "Upload package"
uses: actions/upload-artifact@v2
with:
name: centos-9
path: build/output/centos-9/memgraph*.rpm
debian-10:
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-10
- name: "Upload package"
uses: actions/upload-artifact@v2
with:
name: debian-10
path: build/output/debian-10/memgraph*.deb
debian-11:
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11
- name: "Upload package"
uses: actions/upload-artifact@v2
with:
name: debian-11
path: build/output/debian-11/memgraph*.deb
docker:
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
cd release/package
./run.sh package debian-11 --for-docker
./run.sh docker
- name: "Upload package"
uses: actions/upload-artifact@v2
with:
name: docker
path: build/output/docker/memgraph*.tar.gz
ubuntu-1804:
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-18.04
- name: "Upload package"
uses: actions/upload-artifact@v2
with:
name: ubuntu-1804
path: build/output/ubuntu-18.04/memgraph*.deb
ubuntu-2004:
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-20.04
- name: "Upload package"
uses: actions/upload-artifact@v2
with:
name: ubuntu-2004
path: build/output/ubuntu-20.04/memgraph*.deb
ubuntu-2204:
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-22.04
- name: "Upload package"
uses: actions/upload-artifact@v2
with:
name: ubuntu-2204
path: build/output/ubuntu-22.04/memgraph*.deb
debian-11-platform:
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11 --for-platform
- name: "Upload package"
uses: actions/upload-artifact@v2
with:
name: debian-11-platform
path: build/output/debian-11/memgraph*.deb
debian-11-arm:
runs-on: [self-hosted, DockerMgBuild, ARM64, strange]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v3
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11-arm
- name: "Upload package"
uses: actions/upload-artifact@v2
with:
name: debian-11-arm
path: build/output/debian-11-arm/memgraph*.deb

.github/workflows/package_memgraph.yaml

@@ -0,0 +1,295 @@
name: Package memgraph
# TODO(gitbuda): Cleanup docker container if GHA job was canceled.
on:
workflow_dispatch:
inputs:
memgraph_version:
description: "Memgraph version to upload as. Leave this field empty if you don't want to upload binaries to S3. Format: 'X.Y.Z'"
required: false
build_type:
type: choice
description: "Memgraph Build type. Default value is Release"
default: 'Release'
options:
- Release
- RelWithDebInfo
target_os:
type: choice
description: "Target OS for which memgraph will be packaged. Select 'all' if you want to package for every listed OS. Default is Ubuntu 22.04"
default: 'ubuntu-22_04'
options:
- all
- amzn-2
- centos-7
- centos-9
- debian-10
- debian-11
- debian-11-arm
- debian-11-platform
- docker
- fedora-36
- ubuntu-18_04
- ubuntu-20_04
- ubuntu-22_04
- ubuntu-22_04-arm
jobs:
amzn-2:
if: ${{ github.event.inputs.target_os == 'amzn-2' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package amzn-2 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: amzn-2
path: build/output/amzn-2/memgraph*.rpm
centos-7:
if: ${{ github.event.inputs.target_os == 'centos-7' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package centos-7 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: centos-7
path: build/output/centos-7/memgraph*.rpm
centos-9:
if: ${{ github.event.inputs.target_os == 'centos-9' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package centos-9 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: centos-9
path: build/output/centos-9/memgraph*.rpm
debian-10:
if: ${{ github.event.inputs.target_os == 'debian-10' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-10 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-10
path: build/output/debian-10/memgraph*.deb
debian-11:
if: ${{ github.event.inputs.target_os == 'debian-11' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-11
path: build/output/debian-11/memgraph*.deb
debian-11-arm:
if: ${{ github.event.inputs.target_os == 'debian-11-arm' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, ARM64, strange]
timeout-minutes: 120
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11-arm ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-11-aarch64
path: build/output/debian-11-arm/memgraph*.deb
debian-11-platform:
if: ${{ github.event.inputs.target_os == 'debian-11-platform' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11 ${{ github.event.inputs.build_type }} --for-platform
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-11-platform
path: build/output/debian-11/memgraph*.deb
docker:
if: ${{ github.event.inputs.target_os == 'docker' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
cd release/package
./run.sh package debian-11 ${{ github.event.inputs.build_type }} --for-docker
./run.sh docker
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: docker
path: build/output/docker/memgraph*.tar.gz
fedora-36:
if: ${{ github.event.inputs.target_os == 'fedora-36' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package fedora-36 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: fedora-36
path: build/output/fedora-36/memgraph*.rpm
ubuntu-18_04:
if: ${{ github.event.inputs.target_os == 'ubuntu-18_04' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-18.04 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: ubuntu-18.04
path: build/output/ubuntu-18.04/memgraph*.deb
ubuntu-20_04:
if: ${{ github.event.inputs.target_os == 'ubuntu-20_04' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-20.04 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: ubuntu-20.04
path: build/output/ubuntu-20.04/memgraph*.deb
ubuntu-22_04:
if: ${{ github.event.inputs.target_os == 'ubuntu-22_04' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-22.04 ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: ubuntu-22.04
path: build/output/ubuntu-22.04/memgraph*.deb
ubuntu-22_04-arm:
if: ${{ github.event.inputs.target_os == 'ubuntu-22_04-arm' || github.event.inputs.target_os == 'all' }}
runs-on: [self-hosted, DockerMgBuild, ARM64, strange]
timeout-minutes: 120
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-22.04-arm ${{ github.event.inputs.build_type }}
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: ubuntu-22.04-aarch64
path: build/output/ubuntu-22.04-arm/memgraph*.deb
upload-to-s3:
# Only run the upload if a version is specified. This allows runs without uploading.
if: "${{ github.event.inputs.memgraph_version != '' }}"
needs: [amzn-2, centos-7, centos-9, debian-10, debian-11, debian-11-arm, debian-11-platform, docker, fedora-36, ubuntu-18_04, ubuntu-20_04, ubuntu-22_04, ubuntu-22_04-arm]
runs-on: ubuntu-latest
steps:
- name: Download artifacts
uses: actions/download-artifact@v4
with:
# name: # if name input parameter is not provided, all artifacts are downloaded
# and put in directories named after each one.
path: build/output/release
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "download.memgraph.com"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output/release"
DEST_DIR: "memgraph/v${{ github.event.inputs.memgraph_version }}/"


@@ -0,0 +1,85 @@
name: Run performance benchmarks manually
on:
workflow_dispatch:
jobs:
performance_benchmarks:
name: "Performance benchmarks"
runs-on: [self-hosted, Linux, X64, Diff, Gen7]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build only memgraph release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=release ..
make -j$THREADS
- name: Get branch name (merge)
if: github.event_name != 'pull_request'
shell: bash
run: echo "BRANCH_NAME=$(echo ${GITHUB_REF#refs/heads/} | tr / -)" >> $GITHUB_ENV
- name: Get branch name (pull request)
if: github.event_name == 'pull_request'
shell: bash
run: echo "BRANCH_NAME=$(echo ${GITHUB_HEAD_REF} | tr / -)" >> $GITHUB_ENV
- name: Run benchmarks
run: |
cd tests/mgbench
./benchmark.py vendor-native --num-workers-for-benchmark 12 --export-results benchmark_result.json pokec/medium/*/*
./benchmark.py vendor-native --num-workers-for-benchmark 1 --export-results benchmark_supernode.json supernode
./benchmark.py vendor-native --num-workers-for-benchmark 1 --export-results benchmark_high_write_set_property.json high_write_set_property
./benchmark.py vendor-native --num-workers-for-benchmark 12 --export-results benchmark_cartesian.json cartesian
- name: Upload benchmark results
run: |
cd tools/bench-graph-client
virtualenv -p python3 ve3
source ve3/bin/activate
pip install -r requirements.txt
./main.py --benchmark-name "mgbench" \
--benchmark-results "../../tests/mgbench/benchmark_result.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
./main.py --benchmark-name "supernode" \
--benchmark-results "../../tests/mgbench/benchmark_supernode.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
./main.py --benchmark-name "high_write_set_property" \
--benchmark-results "../../tests/mgbench/benchmark_high_write_set_property.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"
./main.py --benchmark-name "cartesian" \
--benchmark-results "../../tests/mgbench/cartesian.json" \
--github-run-id "${{ github.run_id }}" \
--github-run-number "${{ github.run_number }}" \
--head-branch-name "${{ env.BRANCH_NAME }}"


@@ -0,0 +1,208 @@
name: Release build test
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}
cancel-in-progress: true
on:
workflow_dispatch:
inputs:
build_type:
type: choice
description: "Memgraph Build type. Default value is Release."
default: 'Release'
options:
- Release
- RelWithDebInfo
push:
branches:
- "release/**"
tags:
- "v*.*.*-rc*"
- "v*.*-rc*"
schedule:
# UTC
- cron: "0 22 * * *"
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
BUILD_TYPE: ${{ github.event.inputs.build_type || 'Release' }}
jobs:
Debian10:
uses: ./.github/workflows/release_debian10.yaml
with:
build_type: ${{ github.event.inputs.build_type || 'Release' }}
secrets: inherit
Ubuntu20_04:
uses: ./.github/workflows/release_ubuntu2004.yaml
with:
build_type: ${{ github.event.inputs.build_type || 'Release' }}
secrets: inherit
PackageDebian10:
if: github.ref_type == 'tag'
needs: [Debian10]
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-10 $BUILD_TYPE
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-10
path: build/output/debian-10/memgraph*.deb
PackageUbuntu20_04:
if: github.ref_type == 'tag'
needs: [Ubuntu20_04]
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-22.04 $BUILD_TYPE
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: ubuntu-22.04
path: build/output/ubuntu-22.04/memgraph*.deb
PackageUbuntu20_04_ARM:
if: github.ref_type == 'tag'
needs: [Ubuntu20_04]
runs-on: [self-hosted, DockerMgBuild, ARM64]
# M1 Mac mini is sometimes slower
timeout-minutes: 150
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package ubuntu-22.04-arm $BUILD_TYPE
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: ubuntu-22.04-aarch64
path: build/output/ubuntu-22.04-arm/memgraph*.deb
PushToS3Ubuntu20_04_ARM:
if: github.ref_type == 'tag'
needs: [PackageUbuntu20_04_ARM]
runs-on: ubuntu-latest
steps:
- name: Download package
uses: actions/download-artifact@v4
with:
name: ubuntu-22.04-aarch64
path: build/output/release
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output/release"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"
PackageDebian11:
if: github.ref_type == 'tag'
needs: [Debian10, Ubuntu20_04]
runs-on: [self-hosted, DockerMgBuild, X64]
timeout-minutes: 60
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11 $BUILD_TYPE
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-11
path: build/output/debian-11/memgraph*.deb
PackageDebian11_ARM:
if: github.ref_type == 'tag'
needs: [Debian10, Ubuntu20_04]
runs-on: [self-hosted, DockerMgBuild, ARM64]
# M1 Mac mini is sometimes slower
timeout-minutes: 150
steps:
- name: "Set up repository"
uses: actions/checkout@v4
with:
fetch-depth: 0 # Required because of release/get_version.py
- name: "Build package"
run: |
./release/package/run.sh package debian-11-arm $BUILD_TYPE
- name: "Upload package"
uses: actions/upload-artifact@v4
with:
name: debian-11-aarch64
path: build/output/debian-11-arm/memgraph*.deb
PushToS3Debian11_ARM:
if: github.ref_type == 'tag'
needs: [PackageDebian11_ARM]
runs-on: ubuntu-latest
steps:
- name: Download package
uses: actions/download-artifact@v4
with:
name: debian-11-aarch64
path: build/output/release
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "deps.memgraph.io"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output/release"
DEST_DIR: "memgraph-unofficial/${{ github.ref_name }}/"


@@ -1,315 +0,0 @@
name: Release CentOS 8
on:
workflow_dispatch:
schedule:
- cron: "0 1 * * *"
jobs:
community_build:
name: "Community build"
runs-on: [self-hosted, Linux, X64, CentOS8]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 960
steps:
- name: Set up repository
uses: actions/checkout@v2
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build community binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build community binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=release -DMG_ENTERPRISE=OFF ..
make -j$THREADS
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure
coverage_build:
name: "Coverage build"
runs-on: [self-hosted, Linux, X64, CentOS8]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
steps:
- name: Set up repository
uses: actions/checkout@v2
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build coverage binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build coverage binaries.
cd build
cmake -DTEST_COVERAGE=ON ..
make -j$THREADS memgraph__unit
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure
- name: Compute code coverage
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Compute code coverage.
cd tools/github
./coverage_convert
# Package code coverage.
cd generated
tar -czf code_coverage.tar.gz coverage.json html report.json summary.rmu
- name: Save code coverage
uses: actions/upload-artifact@v2
with:
name: "Code coverage"
path: tools/github/generated/code_coverage.tar.gz
debug_build:
name: "Debug build"
runs-on: [self-hosted, Linux, X64, CentOS8]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
steps:
- name: Set up repository
uses: actions/checkout@v2
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build debug binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build debug binaries.
cd build
cmake ..
make -j$THREADS
- name: Run leftover CTest tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run leftover CTest tests (all except unit and benchmark tests).
cd build
ctest -E "(memgraph__unit|memgraph__benchmark)" --output-on-failure
- name: Run drivers tests
run: |
./tests/drivers/run.sh
- name: Run integration tests
run: |
cd tests/integration
for name in *; do
if [ ! -d $name ]; then continue; fi
pushd $name >/dev/null
echo "Running: $name"
if [ -x prepare.sh ]; then
./prepare.sh
fi
if [ -x runner.py ]; then
./runner.py
elif [ -x runner.sh ]; then
./runner.sh
fi
echo
popd >/dev/null
done
- name: Run cppcheck and clang-format
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run cppcheck and clang-format.
cd tools/github
./cppcheck_and_clang_format diff
- name: Save cppcheck and clang-format errors
uses: actions/upload-artifact@v2
with:
name: "Code coverage"
path: tools/github/cppcheck_and_clang_format.txt
release_build:
name: "Release build"
runs-on: [self-hosted, Linux, X64, CentOS8]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 960
steps:
- name: Set up repository
uses: actions/checkout@v2
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=release ..
make -j$THREADS
- name: Create enterprise RPM package
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
cd build
# create mgconsole
# we use the -B to force the build
make -j$THREADS -B mgconsole
# Create enterprise RPM package.
mkdir output && cd output
cpack -G RPM --config ../CPackConfig.cmake
rpmlint memgraph*.rpm
- name: Save enterprise RPM package
uses: actions/upload-artifact@v2
with:
name: "Enterprise RPM package"
path: build/output/memgraph*.rpm
- name: Run micro benchmark tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run micro benchmark tests.
cd build
# The `eval` benchmark needs a large stack limit.
ulimit -s 262144
ctest -R memgraph__benchmark -V
- name: Run macro benchmark tests
run: |
cd tests/macro_benchmark
./harness QuerySuite MemgraphRunner \
--groups aggregation 1000_create unwind_create dense_expand match \
--no-strict
- name: Run parallel macro benchmark tests
run: |
cd tests/macro_benchmark
./harness QueryParallelSuite MemgraphRunner \
--groups aggregation_parallel create_parallel bfs_parallel \
--num-database-workers 9 --num-clients-workers 30 \
--no-strict
- name: Run GQL Behave tests
run: |
cd tests/gql_behave
./continuous_integration
- name: Save quality assurance status
uses: actions/upload-artifact@v2
with:
name: "GQL Behave Status"
path: |
tests/gql_behave/gql_behave_status.csv
tests/gql_behave/gql_behave_status.html
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure
- name: Run e2e tests
run: |
# TODO(gitbuda): Setup mgclient and pymgclient properly.
cd tests
./setup.sh
source ve3/bin/activate
cd e2e
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:../../libs/mgclient/lib python runner.py --workloads-root-directory .
- name: Run stress test (plain)
run: |
cd tests/stress
./continuous_integration
- name: Run stress test (SSL)
run: |
cd tests/stress
./continuous_integration --use-ssl
- name: Run stress test (large)
run: |
cd tests/stress
./continuous_integration --large-dataset
- name: Run durability test (plain)
run: |
cd tests/stress
source ve3/bin/activate
python3 durability --num-steps 5
- name: Run durability test (large)
run: |
cd tests/stress
source ve3/bin/activate
python3 durability --num-steps 20


@@ -1,23 +1,38 @@
name: Release Debian 10
on:
workflow_call:
inputs:
build_type:
type: string
description: "Memgraph Build type. Default value is Release."
default: 'Release'
workflow_dispatch:
schedule:
- cron: "0 1 * * *"
inputs:
build_type:
type: choice
description: "Memgraph Build type. Default value is Release."
default: 'Release'
options:
- Release
- RelWithDebInfo
env:
OS: "Debian10"
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
BUILD_TYPE: ${{ github.event.inputs.build_type || 'Release' }}
jobs:
community_build:
name: "Community build"
runs-on: [self-hosted, Linux, X64, Debian10]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 960
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@@ -33,7 +48,7 @@ jobs:
# Build community binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=release -DMG_ENTERPRISE=OFF ..
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE -DMG_ENTERPRISE=OFF ..
make -j$THREADS
- name: Run unit tests
@@ -52,10 +67,11 @@ jobs:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@@ -97,22 +113,19 @@ jobs:
tar -czf code_coverage.tar.gz coverage.json html report.json summary.rmu
- name: Save code coverage
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: "Code coverage"
name: "Code coverage(Coverage build)-${{ env.OS }}"
path: tools/github/generated/code_coverage.tar.gz
debug_build:
name: "Debug build"
runs-on: [self-hosted, Linux, X64, Debian10]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@@ -144,25 +157,6 @@ jobs:
run: |
./tests/drivers/run.sh
- name: Run integration tests
run: |
cd tests/integration
for name in *; do
if [ ! -d $name ]; then continue; fi
pushd $name >/dev/null
echo "Running: $name"
if [ -x prepare.sh ]; then
./prepare.sh
fi
if [ -x runner.py ]; then
./runner.py
elif [ -x runner.sh ]; then
./runner.sh
fi
echo
popd >/dev/null
done
- name: Run cppcheck and clang-format
run: |
# Activate toolchain.
@@ -173,23 +167,49 @@ jobs:
./cppcheck_and_clang_format diff
- name: Save cppcheck and clang-format errors
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: "Code coverage"
name: "Code coverage(Debug build)-${{ env.OS }}"
path: tools/github/cppcheck_and_clang_format.txt
debug_integration_test:
name: "Debug integration tests"
runs-on: [self-hosted, Linux, X64, Debian10]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build debug binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build debug binaries.
cd build
cmake ..
make -j$THREADS
- name: Run integration tests
run: |
tests/integration/run.sh
release_build:
name: "Release build"
runs-on: [self-hosted, Linux, X64, Debian10]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 960
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@@ -205,7 +225,7 @@ jobs:
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=release ..
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Create enterprise DEB package
@@ -224,11 +244,60 @@ jobs:
cpack -G DEB --config ../CPackConfig.cmake
- name: Save enterprise DEB package
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: "Enterprise DEB package"
name: "Enterprise DEB package-${{ env.OS}}"
path: build/output/memgraph*.deb
- name: Run GQL Behave tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
cd gql_behave
./continuous_integration
- name: Save quality assurance status
uses: actions/upload-artifact@v4
with:
name: "GQL Behave Status-${{ env.OS }}"
path: |
tests/gql_behave/gql_behave_status.csv
tests/gql_behave/gql_behave_status.html
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure
release_benchmark_tests:
name: "Release Benchmark Tests"
runs-on: [self-hosted, Linux, X64, Debian10]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build release binaries
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Run micro benchmark tests
run: |
# Activate toolchain.
@@ -255,36 +324,79 @@ jobs:
--num-database-workers 9 --num-clients-workers 30 \
--no-strict
- name: Run GQL Behave tests
run: |
cd tests/gql_behave
./continuous_integration
release_e2e_test:
name: "Release End-to-end Test"
runs-on: [self-hosted, Linux, X64, Debian10]
timeout-minutes: 60
- name: Save quality assurance status
uses: actions/upload-artifact@v2
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
name: "GQL Behave Status"
path: |
tests/gql_behave/gql_behave_status.csv
tests/gql_behave/gql_behave_status.html
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Run unit tests
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build release binaries
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Ensure Kafka and Pulsar are up
run: |
cd tests/e2e/streams/kafka
docker-compose up -d
cd ../pulsar
docker-compose up -d
- name: Run e2e tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
source ve3/bin/activate_e2e
cd e2e
./run.sh
- name: Ensure Kafka and Pulsar are down
if: always()
run: |
cd tests/e2e/streams/kafka
docker-compose down
cd ../pulsar
docker-compose down
release_durability_stress_tests:
name: "Release durability and stress tests"
runs-on: [self-hosted, Linux, X64, Debian10]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure
# Initialize dependencies.
./init
- name: Run e2e tests
run: |
# TODO(gitbuda): Setup mgclient and pymgclient properly.
cd tests
./setup.sh
source ve3/bin/activate
cd e2e
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:../../libs/mgclient/lib python runner.py --workloads-root-directory .
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Run stress test (plain)
run: |
@@ -296,11 +408,6 @@ jobs:
cd tests/stress
./continuous_integration --use-ssl
- name: Run stress test (large)
run: |
cd tests/stress
./continuous_integration --large-dataset
- name: Run durability test (plain)
run: |
cd tests/stress
@@ -316,15 +423,11 @@ jobs:
release_jepsen_test:
name: "Release Jepsen Test"
runs-on: [self-hosted, Linux, X64, Debian10, JepsenControl]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@@ -334,23 +437,27 @@ jobs:
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build only memgraph release binary.
cd build
cmake -DCMAKE_BUILD_TYPE=release ..
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS memgraph
- name: Refresh Jepsen Cluster
run: |
cd tests/jepsen
./run.sh cluster-refresh
- name: Run Jepsen tests
run: |
cd tests/jepsen
./run.sh test --binary ../../build/memgraph --run-args "test-all --node-configs resources/node-config.edn" --ignore-run-stdout-logs --ignore-run-stderr-logs
./run.sh test-all-individually --binary ../../build/memgraph --ignore-run-stdout-logs --ignore-run-stderr-logs
- name: Save Jepsen report
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
if: ${{ always() }}
with:
name: "Jepsen Report"
name: "Jepsen Report-${{ env.OS }}"
path: tests/jepsen/Jepsen.tar.gz


@@ -19,20 +19,20 @@ jobs:
DOCKER_REPOSITORY_NAME: memgraph
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
uses: docker/setup-buildx-action@v2
- name: Log in to Docker Hub
uses: docker/login-action@v1
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Download memgraph binary
run: |


@@ -0,0 +1,63 @@
name: "Mgbench Bolt Client Publish Docker Image"
on:
workflow_dispatch:
inputs:
version:
description: "Mgbench bolt client version to publish on Dockerhub."
required: true
force_release:
type: boolean
required: false
default: false
jobs:
mgbench_docker_publish:
runs-on: ubuntu-latest
env:
DOCKER_ORGANIZATION_NAME: memgraph
DOCKER_REPOSITORY_NAME: mgbench-client
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v2
- name: Log in to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Check if specified version is already pushed
run: |
EXISTS=$(docker manifest inspect $DOCKER_ORGANIZATION_NAME/$DOCKER_REPOSITORY_NAME:${{ github.event.inputs.version }} > /dev/null; echo $?)
echo $EXISTS
if [[ ${EXISTS} -eq 0 ]]; then
echo 'The specified version has already been released to DockerHub.'
if [[ ${{ github.event.inputs.force_release }} = true ]]; then
echo 'Forcing the release!'
else
echo 'Stopping the release!'
exit 1
fi
else
echo 'All good, the specified version has not been released to DockerHub.'
fi
- name: Build & push docker images
run: |
cd tests/mgbench
docker buildx build \
--build-arg TOOLCHAIN_VERSION=toolchain-v4 \
--platform linux/amd64,linux/arm64 \
--tag $DOCKER_ORGANIZATION_NAME/$DOCKER_REPOSITORY_NAME:${{ github.event.inputs.version }} \
--tag $DOCKER_ORGANIZATION_NAME/$DOCKER_REPOSITORY_NAME:latest \
--file Dockerfile.mgbench_client \
--push .


@@ -1,23 +1,38 @@
name: Release Ubuntu 20.04
on:
workflow_call:
inputs:
build_type:
type: string
description: "Memgraph Build type. Default value is Release."
default: 'Release'
workflow_dispatch:
schedule:
- cron: "0 1 * * *"
inputs:
build_type:
type: choice
description: "Memgraph Build type. Default value is Release."
default: 'Release'
options:
- Release
- RelWithDebInfo
env:
OS: "Ubuntu 20.04"
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
BUILD_TYPE: ${{ github.event.inputs.build_type || 'Release' }}
jobs:
community_build:
name: "Community build"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 960
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@@ -33,7 +48,7 @@ jobs:
# Build community binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=release -DMG_ENTERPRISE=OFF ..
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE -DMG_ENTERPRISE=OFF ..
make -j$THREADS
- name: Run unit tests
@@ -48,14 +63,11 @@ jobs:
coverage_build:
name: "Coverage build"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@@ -97,22 +109,19 @@ jobs:
tar -czf code_coverage.tar.gz coverage.json html report.json summary.rmu
- name: Save code coverage
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: "Code coverage"
name: "Code coverage(Coverage build)-${{ env.OS }}"
path: tools/github/generated/code_coverage.tar.gz
debug_build:
name: "Debug build"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@@ -144,25 +153,6 @@ jobs:
run: |
./tests/drivers/run.sh
- name: Run integration tests
run: |
cd tests/integration
for name in *; do
if [ ! -d $name ]; then continue; fi
pushd $name >/dev/null
echo "Running: $name"
if [ -x prepare.sh ]; then
./prepare.sh
fi
if [ -x runner.py ]; then
./runner.py
elif [ -x runner.sh ]; then
./runner.sh
fi
echo
popd >/dev/null
done
- name: Run cppcheck and clang-format
run: |
# Activate toolchain.
@@ -173,23 +163,49 @@ jobs:
./cppcheck_and_clang_format diff
- name: Save cppcheck and clang-format errors
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: "Code coverage"
name: "Code coverage(Debug build)-${{ env.OS }}"
path: tools/github/cppcheck_and_clang_format.txt
debug_integration_test:
name: "Debug integration tests"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build debug binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build debug binaries.
cd build
cmake ..
make -j$THREADS
- name: Run integration tests
run: |
tests/integration/run.sh
release_build:
name: "Release build"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
timeout-minutes: 960
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
@@ -205,7 +221,7 @@ jobs:
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=release ..
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Create enterprise DEB package
@@ -224,11 +240,60 @@ jobs:
cpack -G DEB --config ../CPackConfig.cmake
- name: Save enterprise DEB package
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: "Enterprise DEB package"
name: "Enterprise DEB package-${{ env.OS }}"
path: build/output/memgraph*.deb
- name: Run GQL Behave tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
cd gql_behave
./continuous_integration
- name: Save quality assurance status
uses: actions/upload-artifact@v4
with:
name: "GQL Behave Status-${{ env.OS }}"
path: |
tests/gql_behave/gql_behave_status.csv
tests/gql_behave/gql_behave_status.html
- name: Run unit tests
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure
release_benchmark_tests:
name: "Release Benchmark Tests"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build release binaries
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Run micro benchmark tests
run: |
# Activate toolchain.
@@ -255,36 +320,79 @@ jobs:
--num-database-workers 9 --num-clients-workers 30 \
--no-strict
- name: Run GQL Behave tests
run: |
cd tests/gql_behave
./continuous_integration
release_e2e_test:
name: "Release End-to-end Test"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
timeout-minutes: 60
- name: Save quality assurance status
uses: actions/upload-artifact@v2
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
name: "GQL Behave Status"
path: |
tests/gql_behave/gql_behave_status.csv
tests/gql_behave/gql_behave_status.html
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Run unit tests
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build release binaries
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Ensure Kafka and Pulsar are up
run: |
cd tests/e2e/streams/kafka
docker-compose up -d
cd ../pulsar
docker-compose up -d
- name: Run e2e tests
run: |
cd tests
./setup.sh /opt/toolchain-v4/activate
source ve3/bin/activate_e2e
cd e2e
./run.sh
- name: Ensure Kafka and Pulsar are down
if: always()
run: |
cd tests/e2e/streams/kafka
docker-compose down
cd ../pulsar
docker-compose down
release_durability_stress_tests:
name: "Release durability and stress tests"
runs-on: [self-hosted, Linux, X64, Ubuntu20.04]
timeout-minutes: 60
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Run unit tests.
cd build
ctest -R memgraph__unit --output-on-failure
# Initialize dependencies.
./init
- name: Run e2e tests
run: |
# TODO(gitbuda): Setup mgclient and pymgclient properly.
cd tests
./setup.sh
source ve3/bin/activate
cd e2e
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:../../libs/mgclient/lib python runner.py --workloads-root-directory .
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Run stress test (plain)
run: |
@@ -296,11 +404,6 @@ jobs:
cd tests/stress
./continuous_integration --use-ssl
- name: Run stress test (large)
run: |
cd tests/stress
./continuous_integration --large-dataset
- name: Run durability test (plain)
run: |
cd tests/stress


@@ -0,0 +1,68 @@
name: Stress test large
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}
cancel-in-progress: true
on:
workflow_dispatch:
inputs:
build_type:
type: choice
description: "Memgraph Build type. Default value is Release."
default: 'Release'
options:
- Release
- RelWithDebInfo
push:
tags:
- "v*.*.*-rc*"
- "v*.*-rc*"
schedule:
- cron: "0 22 * * *"
env:
THREADS: 24
MEMGRAPH_ENTERPRISE_LICENSE: ${{ secrets.MEMGRAPH_ENTERPRISE_LICENSE }}
MEMGRAPH_ORGANIZATION_NAME: ${{ secrets.MEMGRAPH_ORGANIZATION_NAME }}
BUILD_TYPE: ${{ github.event.inputs.build_type || 'Release' }}
jobs:
stress_test_large:
name: "Stress test large"
timeout-minutes: 720
strategy:
matrix:
os: [Debian10, Ubuntu20.04]
extra: [BigMemory, Gen8]
exclude:
- os: Debian10
extra: Gen8
- os: Ubuntu20.04
extra: BigMemory
runs-on: [self-hosted, Linux, X64, "${{ matrix.os }}", "${{ matrix.extra }}"]
steps:
- name: Set up repository
uses: actions/checkout@v4
with:
# Number of commits to fetch. `0` indicates all history for all
# branches and tags. (default: 1)
fetch-depth: 0
- name: Build release binaries
run: |
# Activate toolchain.
source /opt/toolchain-v4/activate
# Initialize dependencies.
./init
# Build release binaries.
cd build
cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
make -j$THREADS
- name: Run stress test (large)
run: |
cd tests/stress
./continuous_integration --large-dataset

.github/workflows/upload_to_s3.yaml

@@ -0,0 +1,32 @@
name: Upload Package All artifacts to S3
on:
workflow_dispatch:
inputs:
memgraph_version:
description: "Memgraph version to upload as. Format: 'X.Y.Z'"
required: true
run_number:
description: "# of the package_all workflow run to upload artifacts from. Format: '#XYZ'"
required: true
jobs:
upload-to-s3:
runs-on: ubuntu-latest
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@v4
with:
workflow: package_all.yaml
workflow_conclusion: success
run_number: "${{ github.event.inputs.run_number }}"
path: build/output/release
- name: Upload to S3
uses: jakejarvis/s3-sync-action@v0.5.1
env:
AWS_S3_BUCKET: "download.memgraph.com"
AWS_ACCESS_KEY_ID: ${{ secrets.S3_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "eu-west-1"
SOURCE_DIR: "build/output/release"
DEST_DIR: "memgraph/v${{ github.event.inputs.memgraph_version }}/"

.gitignore

@@ -16,8 +16,7 @@
.ycm_extra_conf.pyc
.temp/
Testing/
build
build/
/build*/
release/examples/build
cmake-build-*
cmake/DownloadProject/
@@ -34,9 +33,6 @@ TAGS
*.fas
*.fasl
# LCP generated C++ files
*.lcp.cpp
src/database/distributed/serialization.hpp
src/database/single_node_ha/serialization.hpp
src/distributed/bfs_rpc_messages.hpp
@@ -50,15 +46,11 @@ src/distributed/pull_produce_rpc_messages.hpp
src/distributed/storage_gc_rpc_messages.hpp
src/distributed/token_sharing_rpc_messages.hpp
src/distributed/updates_rpc_messages.hpp
src/query/frontend/ast/ast.hpp
src/query/distributed/frontend/ast/ast_serialization.hpp
src/durability/distributed/state_delta.hpp
src/durability/single_node/state_delta.hpp
src/durability/single_node_ha/state_delta.hpp
src/query/frontend/semantic/symbol.hpp
src/query/distributed/frontend/semantic/symbol_serialization.hpp
src/query/distributed/plan/ops.hpp
src/query/plan/operator.hpp
src/raft/log_entry.hpp
src/raft/raft_rpc_messages.hpp
src/raft/snapshot_metadata.hpp
@@ -66,3 +58,7 @@ src/raft/storage_info_rpc_messages.hpp
src/stats/stats_rpc_messages.hpp
src/storage/distributed/rpc/concurrent_id_mapper_rpc_messages.hpp
src/transactions/distributed/engine_rpc_messages.hpp
/tests/manual/js/transaction_timeout/package-lock.json
/tests/manual/js/transaction_timeout/node_modules/
.vscode/
src/query/frontend/opencypher/grammar/.antlr/*


@@ -1,24 +1,35 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.3.0
rev: v4.4.0
hooks:
- id: check-yaml
args: [--allow-multiple-documents]
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: https://github.com/psf/black
rev: 22.3.0
rev: 23.1.0
hooks:
- id: black
args: # arguments to configure black
- --line-length=120
- --include='\.pyi?$'
# these folders wont be formatted by black
- --exclude="""\.git |
\.__pycache__|
build|
libs|
.cache"""
- repo: https://github.com/pycqa/isort
rev: 5.12.0
hooks:
- id: isort
name: isort (python)
args: ["--profile", "black"]
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: v13.0.0
hooks:
- id: clang-format
# - repo: local
# hooks:
# - id: clang-tidy
# name: clang-tidy
# description: Runs clang-tidy and checks for errors
# entry: python ./tools/pre-commit/clang-tidy.py
# language: python
# files: ^src/
# types: [c++, text]
# fail_fast: true
# require_serial: true
# args: [--compile_commands_path=build]
# pass_filenames: false

.sonarcloud.properties

@@ -0,0 +1,22 @@
# Path to sources
sonar.sources = .
# sonar.exclusions=
sonar.inclusions=src,include,query_modules
# Path to tests
sonar.tests = tests/
# sonar.test.exclusions=
# sonar.test.inclusions=
# Source encoding
# sonar.sourceEncoding=
# Exclusions for copy-paste detection
# sonar.cpd.exclusions=
# Python version (for python projects only)
# sonar.python.version=
# C++ standard version (for C++ projects only)
# If not specified, it defaults to the latest supported standard
# sonar.cfamily.reportingCppStandardOverride=c++98|c++11|c++14|c++17|c++20

ADRs/001_tantivy.md

@@ -0,0 +1,32 @@
# Tantivy ADR
**Author**
Marko Budiselic (github.com/gitbuda)
**Status**
APPROVED
**Date**
January 5, 2024
**Problem**
For some Memgraph workloads, text search is a required feature. We don't
want to build a new text search engine because that's not Memgraph's core
value.
**Criteria**
- easy integration with our C++ codebase
- ability to operate in-memory and on-disk
- sufficient features (regex, full-text search, fuzzy search, aggregations over
text data)
- production-ready
**Decision**
None of the known C++ libraries are production-ready. Recent Rust libraries,
in particular [Tantivy](https://github.com/quickwit-oss/tantivy), provide many
more features and are production-ready. We'll integrate Tantivy into the
current Memgraph codebase via [cxx](https://github.com/dtolnay/cxx). **We
select Tantivy.**

ADRs/002_nuraft.md (new file, 34 lines)

@ -0,0 +1,34 @@
# NuRaft ADR
**Author**
Marko Budiselic (github.com/gitbuda)
**Status**
PROPOSED
**Date**
January 10, 2024
**Problem**
To enhance Memgraph with the High Availability features our customers have
requested, we want reliable coordinators backed by the Raft consensus
algorithm. Implementing Raft correctly and performantly is a very challenging
task. Skilled Memgraph engineers already tried three times and failed to
deliver in a reasonable timeframe each time (approximately 4 person-weeks of
engineering work per attempt).
**Criteria**
- easy integration with our C++ codebase
- heavily tested in production environments
- implementation of performance optimizations on top of the canonical Raft
implementation
**Decision**
There are a few robust C++ implementations of Raft, but they come as parts of
other projects or bigger libraries. **We select
[NuRaft](https://github.com/eBay/NuRaft)** because it focuses on delivering
Raft without bloatware, and it's used by
[ClickHouse](https://github.com/ClickHouse/ClickHouse) (a comparable peer to
Memgraph and a very well-established product).

ADRs/003_rocksdb.md (new file, 38 lines)

@ -0,0 +1,38 @@
# RocksDB ADR
**Author**
Marko Budiselic (github.com/gitbuda)
**Status**
ACCEPTED
**Date**
January 23, 2024
**Problem**
Interacting with data (reads and writes) on disk in a concurrent, safe, and
fast way is a challenging task. Implementing all the low-level primitives to
interact with various disk hardware efficiently consumes significant
engineering effort. Whenever Memgraph has to store data on disk (or any
storage system colder than RAM), the problem is how to do that in the least
amount of development time while satisfying all functional requirements
(often performance).
**Criteria**
- working efficiently in a highly concurrent environment
- easy integration with Memgraph's C++ codebase
- providing low-level key-value API
- heavily tested in production environments
- providing abstractions for the storage hardware (even for cloud-based
storages like S3)
**Decision**
There are a few robust key-value stores, but finding one that is
production-ready and compatible with Memgraph's C++ codebase is challenging.
**We select [RocksDB](https://github.com/facebook/rocksdb)** because it
delivers a robust API to manage data on disk, it's battle-tested in many
production environments (many database systems embed RocksDB), and it's the
most compatible option.

ADRs/README.md (new file, 67 lines)

@ -0,0 +1,67 @@
# Architecture Decision Records
Also known as ADRs. This practice has become widespread in many
high-performing engineering teams. It is a technique for communicating
between software engineers. ADRs provide a clear and documented
history of architectural choices, ensuring that everyone on the
team is on the same page. This improves communication and reduces
misunderstandings. The act of recording decisions encourages
thoughtful consideration before making choices, which leads to
more robust and better-informed architectural decisions.
Links must be created, pointing both to and from the GitHub Issues
and/or the Notion Program Management "Initiative" database.
ADRs are complementary to any tech specs that get written while
designing a solution. ADRs are very short and to the point, while
tech specs will include diagrams and can be quite verbose.
## HOWTO
Each ADR will be assigned a monotonically increasing unique numeric
identifier, which will be zero-padded to 3 digits. Each ADR will
be in a single markdown file containing no more than one page of
text, and the filename will start with that unique identifier,
followed by a snake case phrase summarizing the problem. For
example: `001_architecture_decision_records.md` or
`002_big_integration_cap_theorem.md`.
We want to use an ADR when:
1. Significant Impact: This includes choices that affect scalability, performance, or fundamental design principles.
1. Long-Term Ramifications: When a decision is expected to have long-term ramifications or is difficult to reverse.
1. Architectural Principles: ADRs are suitable for documenting decisions related to architectural principles, frameworks, or patterns that shape the system's structure.
1. Controversial Choices: When a decision is likely to be controversial or may require justification in the future.
The most senior engineer on a project will evaluate and decide
whether or not an ADR is needed.
## Do
1. Keep them brief and concise.
1. Explain the trade-offs.
1. Each ADR should be about one AD, not multiple ADs.
1. Don't alter existing information in an ADR. Instead, amend the ADR by adding new information, or supersede the ADR by creating a new ADR.
1. Explain your organization's situation and business priorities.
1. Include rationale and considerations based on social and skills makeups of your teams.
1. Include pros and cons that are relevant, and describe them in terms that align with your needs and goals.
1. Explain what follows from making the decision. This can include the effects, outcomes, outputs, follow ups, and more.
## Don't
1. Try to guess what the executive leader wants, and then attempt to please them. Be objective.
1. Try to solve everything all at once. A pretty good solution now is MUCH BETTER than a perfect solution later. Carpe diem!
1. Hide any doubts or unanswered questions.
1. Make it a sales pitch. Everything has upsides and downsides - be authentic and honest about them.
1. Perform merely a superficial investigation. If an ADR doesn't call for some deep thinking, then it probably shouldn't exist.
1. Ignore the long-term costs such as performance, tech debt or hardware and maintenance.
1. Get tunnel vision where creative or surprising approaches are not explored.
# Template - use the format below for each new ADR
1. **Author** - who has written the ADR
1. **Status** - one of: PROPOSED, ACCEPTED, REJECTED, SUPERSEDED-BY or DEPRECATED
1. **Date** - when the status was most recently updated
1. **Problem** - a concise paragraph explaining the context
1. **Criteria** - a list of the two or three metrics by which the solution was evaluated, and their relative weights (importance)
1. **Decision** - what was chosen as the way forward, and what the consequences are of the decision

CMakeLists.txt

@ -1,6 +1,7 @@
# MemGraph CMake configuration
cmake_minimum_required(VERSION 3.8)
cmake_minimum_required(VERSION 3.12)
cmake_policy(SET CMP0076 NEW)
# !! IMPORTANT !! run ./project_root/init.sh before cmake command
# to download dependencies
@ -18,10 +19,12 @@ set_directory_properties(PROPERTIES CLEAN_NO_CUSTOM TRUE)
# during the code coverage process
find_program(CCACHE_FOUND ccache)
option(USE_CCACHE "ccache:" ON)
message(STATUS "CCache: ${USE_CCACHE}")
if(CCACHE_FOUND AND USE_CCACHE)
set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ccache)
set_property(GLOBAL PROPERTY RULE_LAUNCH_LINK ccache)
message(STATUS "CCache: Used")
else ()
message(STATUS "CCache: Not used")
endif(CCACHE_FOUND AND USE_CCACHE)
# choose a compiler
@ -37,7 +40,14 @@ endif()
# -----------------------------------------------------------------------------
project(memgraph)
project(memgraph LANGUAGES C CXX)
#TODO: upgrade to cmake 3.24 + CheckIPOSupported
#cmake_policy(SET CMP0138 NEW)
#include(CheckIPOSupported)
#check_ipo_supported()
#set(CMAKE_INTERPROCEDURAL_OPTIMIZATION_Release TRUE)
#set(CMAKE_INTERPROCEDURAL_OPTIMIZATION_RelWithDebInfo TRUE)
# Install licenses.
install(DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/licenses/
@ -143,7 +153,9 @@ endif()
# files used can be seen here:
# https://git-scm.com/book/en/v2/Git-Internals-Git-References
set(git_directory "${CMAKE_SOURCE_DIR}/.git")
if (EXISTS "${git_directory}")
# Check for directory because if the repo is cloned as a git submodule, .git is
# a file and below code doesn't work.
if (IS_DIRECTORY "${git_directory}")
set_property(DIRECTORY APPEND PROPERTY
CMAKE_CONFIGURE_DEPENDS "${git_directory}/HEAD")
file(STRINGS "${git_directory}/HEAD" git_head_data)
@ -158,7 +170,7 @@ endif()
# setup CMake module path, defines path for include() and find_package()
# https://cmake.org/cmake/help/latest/variable/CMAKE_MODULE_PATH.html
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${PROJECT_SOURCE_DIR}/cmake)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
# custom function definitions
include(functions)
# -----------------------------------------------------------------------------
@ -184,7 +196,7 @@ set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall \
-Werror=switch -Werror=switch-bool -Werror=return-type \
-Werror=return-stack-address \
-Wno-c99-designator \
-Wno-c99-designator -Wmissing-field-initializers \
-DBOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT")
# Don't omit frame pointer in RelWithDebInfo, for additional callchain debug.
@ -199,8 +211,13 @@ set(CMAKE_CXX_FLAGS_RELWITHDEBINFO
# ** Static linking is allowed only for executables! **
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libgcc -static-libstdc++")
# Use gold linker to speedup build
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fuse-ld=gold")
# Use lld linker to speedup build and use less memory.
add_link_options(-fuse-ld=lld)
# NOTE: Moving to latest Clang (probably starting from 15), lld stopped to work
# without explicit link_directories call.
string(REPLACE ":" " " LD_LIBS $ENV{LD_LIBRARY_PATH})
separate_arguments(LD_LIBS)
link_directories(${LD_LIBS})
# release flags
set(CMAKE_CXX_FLAGS_RELEASE "-O2 -DNDEBUG")
@ -223,7 +240,6 @@ else()
endif()
# -----------------------------------------------------------------------------
# default build type is debug
if (NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE "Debug")
@ -231,7 +247,17 @@ endif()
message(STATUS "CMake build type: ${CMAKE_BUILD_TYPE}")
# -----------------------------------------------------------------------------
set(MG_ARCH "x86_64" CACHE STRING "Host architecture to build Memgraph on. Supported values are x86_64 (default), ARM64.")
add_definitions( -DCMAKE_BUILD_TYPE_NAME="${CMAKE_BUILD_TYPE}")
if (NOT MG_ARCH)
set(MG_ARCH_DESCR "Host architecture to build Memgraph on. Supported values are x86_64, ARM64.")
if (${CMAKE_HOST_SYSTEM_PROCESSOR} MATCHES "aarch64")
set(MG_ARCH "ARM64" CACHE STRING ${MG_ARCH_DESCR})
else()
set(MG_ARCH "x86_64" CACHE STRING ${MG_ARCH_DESCR})
endif()
endif()
message(STATUS "MG_ARCH: ${MG_ARCH}")
# setup external dependencies -------------------------------------------------
@ -250,7 +276,6 @@ endif()
set(libs_dir ${CMAKE_SOURCE_DIR}/libs)
add_subdirectory(libs EXCLUDE_FROM_ALL)
# Optional subproject configuration -------------------------------------------
option(TEST_COVERAGE "Generate coverage reports from running memgraph" OFF)
option(TOOLS "Build tools binaries" ON)
option(QUERY_MODULES "Build query modules containing custom procedures" ON)
@ -258,6 +283,8 @@ option(ASAN "Build with Address Sanitizer. To get a reasonable performance optio
option(TSAN "Build with Thread Sanitizer. To get a reasonable performance option should be used only in Release or RelWithDebInfo build " OFF)
option(UBSAN "Build with Undefined Behaviour Sanitizer" OFF)
# Build feature flags
if (TEST_COVERAGE)
string(TOLOWER ${CMAKE_BUILD_TYPE} lower_build_type)
if (NOT lower_build_type STREQUAL "debug")
@ -271,12 +298,25 @@ if (MG_ENTERPRISE)
add_definitions(-DMG_ENTERPRISE)
endif()
set(ENABLE_JEMALLOC ON)
option(ENABLE_JEMALLOC "Use jemalloc" ON)
option(MG_MEMORY_PROFILE "If build should be setup for memory profiling" OFF)
if (MG_MEMORY_PROFILE AND ENABLE_JEMALLOC)
message(STATUS "Jemalloc has been disabled because MG_MEMORY_PROFILE is enabled")
set(ENABLE_JEMALLOC OFF)
endif ()
if (MG_MEMORY_PROFILE AND ASAN)
message(STATUS "ASAN has been disabled because MG_MEMORY_PROFILE is enabled")
set(ASAN OFF)
endif ()
if (MG_MEMORY_PROFILE)
add_compile_definitions(MG_MEMORY_PROFILE)
endif ()
if (ASAN)
message(WARNING "Disabling jemalloc as it doesn't work well with ASAN")
set(ENABLE_JEMALLOC OFF)
# Enable Addres sanitizer and get nicer stack traces in error messages.
# Enable Address sanitizer and get nicer stack traces in error messages.
# NOTE: AddressSanitizer uses llvm-symbolizer binary from the Clang
# distribution to symbolize the stack traces (note that ideally the
# llvm-symbolizer version must match the version of ASan runtime library).
@ -297,6 +337,8 @@ if (ASAN)
endif()
if (TSAN)
message(WARNING "Disabling jemalloc as it doesn't work well with TSAN")
set(ENABLE_JEMALLOC OFF)
# ThreadSanitizer generally requires all code to be compiled with -fsanitize=thread.
# If some code (e.g. dynamic libraries) is not compiled with the flag, it can
# lead to false positive race reports, false negative race reports and/or
@ -312,7 +354,7 @@ if (TSAN)
# By default ThreadSanitizer uses addr2line utility to symbolize reports.
# llvm-symbolizer is faster, consumes less memory and produces much better
# reports. To use it set runtime flag:
# TSAN_OPTIONS="extern-symbolizer-path=~/llvm-symbolizer"
# TSAN_OPTIONS="extern-symbolizer-path=~/llvm-symbolizer"
# For more runtime flags see: https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags
endif()

README.md (111 changed lines)

@ -1,13 +1,9 @@
<p align="center">
<img width="400px" src="https://uploads-ssl.webflow.com/5e7ceb09657a69bdab054b3a/5e7ceb09657a6937ab054bba_Black_Original%20_Logo.png">
<img src="https://public-assets.memgraph.com/github-readme-images/github-memgraph-repo-banner.png">
</p>
---
<p align="center">
Build modern, graph-based applications on top of your streaming data in minutes.
</p>
<p align="center">
<a href="https://github.com/memgraph/memgraph/blob/master/licenses/APL.txt">
<img src="https://img.shields.io/badge/license-APL-green" alt="license" title="license"/>
@ -22,7 +18,7 @@ Build modern, graph-based applications on top of your streaming data in minutes.
<p align="center">
<a href="https://github.com/memgraph/memgraph">
<img src="https://img.shields.io/github/workflow/status/memgraph/memgraph/Release%20Ubuntu%2020.04/master" alt="build" title="build"/>
<img src="https://img.shields.io/github/actions/workflow/status/memgraph/memgraph/release_debian10.yaml?branch=master&label=build%20and%20test&logo=github"/>
</a>
<a href="https://memgraph.com/docs/" alt="Documentation">
<img src="https://img.shields.io/badge/documentation-Memgraph-orange" />
@ -37,9 +33,10 @@ Build modern, graph-based applications on top of your streaming data in minutes.
## :clipboard: Description
Memgraph is a streaming graph application platform that helps you wrangle your
streaming data, build sophisticated models that you can query in real-time, and
develop graph applications.
Memgraph is an open source graph database built for real-time streaming and
compatible with Neo4j. Whether you're a developer or a data scientist with
interconnected data, Memgraph will get you immediate, actionable insights fast.
Memgraph directly connects to your streaming infrastructure. You can ingest data
from sources like Kafka, SQL, or plain CSV files. Memgraph provides a standard
@ -51,8 +48,20 @@ natural and effective way to model many real-world problems without relying on
complex SQL schemas.
Memgraph is implemented in C/C++ and leverages an in-memory first architecture
to ensure that you're getting the best possible performance consistently and
without surprises. It's also ACID-compliant and highly available.
to ensure that you're getting the [best possible
performance](http://memgraph.com/benchgraph) consistently and without surprises.
It's also ACID-compliant and highly available.
## :zap: Features
- Run Python, Rust, and C/C++ code natively, check out the
[MAGE](https://github.com/memgraph/mage) graph algorithm library
- Native support for machine learning
- Streaming support
- Replication
- Authentication and authorization
- ACID compliance
## :video_game: Memgraph Playground
@ -76,28 +85,49 @@ your browser.
### macOS
[![macOS](https://img.shields.io/badge/macOS-Docker-000000?style=for-the-badge&logo=macos&logoColor=F0F0F0)](https://memgraph.com/docs/memgraph/install-memgraph-on-macos-docker)
[![macOS](https://img.shields.io/badge/lima-AACF41?style=for-the-badge&logo=macos&logoColor=F0F0F0)](https://memgraph.com/docs/memgraph/install-memgraph-on-ubuntu)
### Linux
[![Linux](https://img.shields.io/badge/Linux-Docker-FCC624?style=for-the-badge&logo=linux&logoColor=black)](https://memgraph.com/docs/memgraph/install-memgraph-on-linux-docker)
[![Debian](https://img.shields.io/badge/Debian-D70A53?style=for-the-badge&logo=debian&logoColor=white)](https://memgraph.com/docs/memgraph/install-memgraph-on-debian)
[![Ubuntu](https://img.shields.io/badge/Ubuntu-E95420?style=for-the-badge&logo=ubuntu&logoColor=white)](https://memgraph.com/docs/memgraph/install-memgraph-on-ubuntu)
[![Cent
OS](https://img.shields.io/badge/cent%20os-002260?style=for-the-badge&logo=centos&logoColor=F0F0F0)](https://memgraph.com/docs/memgraph/install-memgraph-from-rpm)
[![Cent OS](https://img.shields.io/badge/cent%20os-002260?style=for-the-badge&logo=centos&logoColor=F0F0F0)](https://memgraph.com/docs/memgraph/install-memgraph-from-rpm)
[![Fedora](https://img.shields.io/badge/fedora-0B57A4?style=for-the-badge&logo=fedora&logoColor=F0F0F0)](https://memgraph.com/docs/memgraph/install-memgraph-from-rpm)
[![RedHat](https://img.shields.io/badge/redhat-EE0000?style=for-the-badge&logo=redhat&logoColor=F0F0F0)](https://memgraph.com/docs/memgraph/install-memgraph-from-rpm)
You can find the binaries and Docker images on the [Download
Hub](https://memgraph.com/download) and the installation instructions in the
[official documentation](https://memgraph.com/docs/memgraph/installation).
## :zap: Features
- Run Python, Rust, and C/C++ code natively, check out the
[MAGE](https://github.com/memgraph/mage) graph algorithm library
- Native support for machine learning
- Streaming support
- Replication
- Authentication and authorization
- ACID compliance
## :cloud: Memgraph Cloud
Check out [Memgraph Cloud](https://memgraph.com/docs/memgraph-cloud) - a cloud service fully managed on AWS and available in 6 geographic regions around the world. Memgraph Cloud allows you to create projects with Enterprise instances of MemgraphDB from your browser.
<p align="left">
<a href="https://memgraph.com/docs/memgraph-cloud">
<img width="450px" alt="Memgraph Cloud" src="https://public-assets.memgraph.com/memgraph-gifs%2Fcloud.gif">
</a>
</p>
## :link: Connect to Memgraph
[Connect to the database](https://memgraph.com/docs/memgraph/connect-to-memgraph) using Memgraph Lab, mgconsole, various drivers (Python, C/C++ and others) and WebSocket.
### :microscope: Memgraph Lab
Visualize graphs and play with queries to understand your data. [Memgraph Lab](https://memgraph.com/docs/memgraph-lab) is a user interface that helps you explore and manipulate the data stored in Memgraph. Visualize graphs, execute ad hoc queries, and optimize their performance.
<p align="left">
<a href="https://memgraph.com/docs/memgraph-lab">
<img width="450px" alt="Memgraph Cloud" src="https://public-assets.memgraph.com/memgraph-gifs%2Flab.gif">
</a>
</p>
## :file_folder: Import data
[Import data](https://memgraph.com/docs/memgraph/import-data) into Memgraph using Kafka, RedPanda or Pulsar streams, CSV and JSON files, or Cypher commands.
## :bookmark_tabs: Documentation
@ -111,29 +141,20 @@ guide](https://memgraph.com/docs/memgraph/reference-guide/configuration).
## :trophy: Contributing
The main purpose of this repository is to continue evolving Memgraph, making it
faster and easier to use. Development of Memgraph happens in the open on GitHub,
and we are grateful to the community for contributing bug fixes and
improvements. Read below to learn how you can take part in improving Memgraph.
Welcome to the heart of Memgraph development! We're on a mission to supercharge Memgraph, making it faster, more user-friendly, and even more powerful. We owe a big thanks to our fantastic community of contributors who help us fix bugs and bring incredible improvements to life. If you're passionate about databases and open source, here's your chance to make a difference!
### Explore Memgraph Internals
Interested in the nuts and bolts of Memgraph? Our [internals documentation](https://memgraph.notion.site/Memgraph-Internals-12b69132d67a417898972927d6870bd2) is where you can uncover the inner workings of Memgraph's architecture, learn how to build the project from scratch, and discover the secrets of effective contributions. Dive deep into the database!
### Dive into the Contributing Guide
Ready to jump into the action? Explore our [contributing guide](CONTRIBUTING.md) to get the inside scoop on how we develop Memgraph. It's your roadmap for suggesting bug fixes and enhancements. Contribute your skills and ideas!
### Code of Conduct
Memgraph has adopted a Code of Conduct that we expect project participants to
adhere to. Please read [the full text](CODE_OF_CONDUCT.md) so that you can
understand what actions will and will not be tolerated.
Our commitment to a respectful and professional community is unwavering. Every participant in Memgraph is expected to adhere to a stringent Code of Conduct. Please carefully review [the complete text](CODE_OF_CONDUCT.md) to gain a comprehensive understanding of the behaviors that are both expected and explicitly prohibited.
### Contributing Guide
Read our [contributing guide](CONTRIBUTING.md) to learn about our development
process and how to propose bug fixes and improvements.
### Internals
Read our
[internal](https://memgraph.notion.site/Memgraph-Internals-12b69132d67a417898972927d6870bd2)
docs to learn more about Memgraph's architecture, how to build the project from
source and how to start contributing. All information related to the database,
can be found in the aforementioned docs.
We maintain a zero-tolerance policy towards any violations. Our shared commitment to this Code of Conduct ensures that Memgraph remains a place where integrity and excellence are paramount.
### :scroll: License
@ -141,8 +162,16 @@ Memgraph Community is available under the [BSL
license](./licenses/BSL.txt).</br> Memgraph Enterprise is available under the
[MEL license](./licenses/MEL.txt).
## :busts_in_silhouette: Community
- :purple_heart: [**Discord**](https://discord.gg/memgraph)
- :ocean: [**Stack Overflow**](https://stackoverflow.com/questions/tagged/memgraphdb)
- :bird: [**Twitter**](https://twitter.com/memgraphdb)
- :movie_camera:
[**YouTube**](https://www.youtube.com/channel/UCZ3HOJvHGxtQ_JHxOselBYg)
<p align="center">
<a href="#">
<img src="https://img.shields.io/badge/⬆back_to_top_⬆-white" alt="Back to top" title="Back to top"/>
<img src="https://img.shields.io/badge/⬆️ back_to_top_⬆-white" alt="Back to top" title="Back to top"/>
</a>
</p>

cmake/FindJemalloc.cmake (deleted)

@ -1,55 +0,0 @@
# Try to find jemalloc library
#
# Use this module as:
# find_package(Jemalloc)
#
# or:
# find_package(Jemalloc REQUIRED)
#
# This will define the following variables:
#
# Jemalloc_FOUND True if the system has the jemalloc library.
# Jemalloc_INCLUDE_DIRS Include directories needed to use jemalloc.
# Jemalloc_LIBRARIES Libraries needed to link to jemalloc.
#
# The following cache variables may also be set:
#
# Jemalloc_INCLUDE_DIR The directory containing jemalloc/jemalloc.h.
# Jemalloc_LIBRARY The path to the jemalloc static library.
find_path(Jemalloc_INCLUDE_DIR NAMES jemalloc/jemalloc.h PATH_SUFFIXES include)
find_library(Jemalloc_LIBRARY NAMES libjemalloc.a PATH_SUFFIXES lib)
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Jemalloc
FOUND_VAR Jemalloc_FOUND
REQUIRED_VARS
Jemalloc_LIBRARY
Jemalloc_INCLUDE_DIR
)
if(Jemalloc_FOUND)
set(Jemalloc_LIBRARIES ${Jemalloc_LIBRARY})
set(Jemalloc_INCLUDE_DIRS ${Jemalloc_INCLUDE_DIR})
else()
if(Jemalloc_FIND_REQUIRED)
message(FATAL_ERROR "Cannot find jemalloc!")
else()
message(WARNING "jemalloc is not found!")
endif()
endif()
if(Jemalloc_FOUND AND NOT TARGET Jemalloc::Jemalloc)
add_library(Jemalloc::Jemalloc UNKNOWN IMPORTED)
set_target_properties(Jemalloc::Jemalloc
PROPERTIES
IMPORTED_LOCATION "${Jemalloc_LIBRARY}"
INTERFACE_INCLUDE_DIRECTORIES "${Jemalloc_INCLUDE_DIR}"
)
endif()
mark_as_advanced(
Jemalloc_INCLUDE_DIR
Jemalloc_LIBRARY
)

cmake/Findjemalloc.cmake (new file, 67 lines)

@ -0,0 +1,67 @@
# Try to find jemalloc library
#
# Use this module as:
# find_package(jemalloc)
#
# or:
# find_package(jemalloc REQUIRED)
#
# This will define the following variables:
#
# JEMALLOC_FOUND True if the system has the jemalloc library.
# Jemalloc_INCLUDE_DIRS Include directories needed to use jemalloc.
# Jemalloc_LIBRARIES Libraries needed to link to jemalloc.
#
# The following cache variables may also be set:
#
# Jemalloc_INCLUDE_DIR The directory containing jemalloc/jemalloc.h.
# Jemalloc_LIBRARY The path to the jemalloc static library.
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(jemalloc
FOUND_VAR JEMALLOC_FOUND
REQUIRED_VARS
JEMALLOC_LIBRARY
JEMALLOC_INCLUDE_DIR
)
if(JEMALLOC_INCLUDE_DIR)
message(STATUS "Found jemalloc include dir: ${JEMALLOC_INCLUDE_DIR}")
else()
message(WARNING "jemalloc not found!")
endif()
if(JEMALLOC_LIBRARY)
message(STATUS "Found jemalloc library: ${JEMALLOC_LIBRARY}")
else()
message(WARNING "jemalloc library not found!")
endif()
if(JEMALLOC_FOUND)
set(Jemalloc_LIBRARIES ${JEMALLOC_LIBRARY})
set(Jemalloc_INCLUDE_DIRS ${JEMALLOC_INCLUDE_DIR})
else()
if(Jemalloc_FIND_REQUIRED)
message(FATAL_ERROR "Cannot find jemalloc!")
else()
message(WARNING "jemalloc is not found!")
endif()
endif()
if(JEMALLOC_FOUND AND NOT TARGET Jemalloc::Jemalloc)
message(STATUS "JEMALLOC NOT TARGET")
add_library(Jemalloc::Jemalloc UNKNOWN IMPORTED)
set_target_properties(Jemalloc::Jemalloc
PROPERTIES
IMPORTED_LOCATION "${JEMALLOC_LIBRARY}"
INTERFACE_INCLUDE_DIRECTORIES "${JEMALLOC_INCLUDE_DIR}"
)
endif()
mark_as_advanced(
JEMALLOC_INCLUDE_DIR
JEMALLOC_LIBRARY
)

config/flags.yaml

@ -99,10 +99,30 @@ modifications:
value: "SNAPSHOT_ISOLATION"
override: true
- name: "storage_mode"
value: "IN_MEMORY_TRANSACTIONAL"
override: true
- name: "allow_load_csv"
value: "true"
override: false
- name: "storage_parallel_index_recovery"
value: "false"
override: true
- name: "storage_parallel_schema_recovery"
value: "false"
override: true
- name: "storage_enable_schema_metadata"
value: "false"
override: true
- name: "query_callable_mappings_path"
value: "/etc/memgraph/apoc_compatibility_mappings.json"
override: true
undocumented:
- "flag_file"
- "also_log_to_stderr"

config/generate.py

@ -5,12 +5,10 @@ import os
import subprocess
import sys
import textwrap
import xml.etree.ElementTree as ET
import yaml
SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
CONFIG_FILE = os.path.join(SCRIPT_DIR, "flags.yaml")
WIDTH = 80
@ -18,14 +16,21 @@ WIDTH = 80
def wrap_text(s, initial_indent="# "):
return "\n#\n".join(
map(lambda x: textwrap.fill(x, WIDTH, initial_indent=initial_indent,
subsequent_indent="# "), s.split("\n")))
map(lambda x: textwrap.fill(x, WIDTH, initial_indent=initial_indent, subsequent_indent="# "), s.split("\n"))
)
def extract_flags(binary_path):
ret = {}
data = subprocess.run([binary_path, "--help-xml"],
stdout=subprocess.PIPE).stdout.decode("utf-8")
data = subprocess.run([binary_path, "--help-xml"], stdout=subprocess.PIPE).stdout.decode("utf-8")
# If something is printed out before the help output, it will break the
# XML parsing -> filter out non-XML lines, because something can be
# logged before the gflags output (e.g. during global objects init).
# This gets called during the memgraph build phase to generate the default
# config file, later installed under /etc/memgraph/memgraph.conf.
# NOTE: Don't use \n in the gflags description strings.
# NOTE: Check here if gflags version changes because of the XML format.
data = "\n".join([line for line in data.split("\n") if line.startswith("<")])
root = ET.fromstring(data)
for child in root:
if child.tag == "usage" and child.text.lower().count("warning"):
@ -46,8 +51,7 @@ def apply_config_to_flags(config, flags):
for modification in config["modifications"]:
name = modification["name"]
if name not in flags:
print("WARNING: Flag '" + name + "' missing from binary!",
file=sys.stderr)
print("WARNING: Flag '" + name + "' missing from binary!", file=sys.stderr)
continue
flags[name]["default"] = modification["value"]
flags[name]["override"] = modification["override"]
@ -75,8 +79,9 @@ def extract_sections(flags):
else:
sections.append((current_section, current_flags))
sections.append(("other", other))
assert set(sum(map(lambda x: x[1], sections), [])) == set(flags.keys()), \
"The section extraction algorithm lost some flags!"
assert set(sum(map(lambda x: x[1], sections), [])) == set(
flags.keys()
), "The section extraction algorithm lost some flags!"
return sections
@ -89,8 +94,7 @@ def generate_config_file(sections, flags):
helpstr = flag["meaning"] + " [" + flag["type"] + "]"
ret += wrap_text(helpstr) + "\n"
prefix = "# " if not flag["override"] else ""
ret += prefix + "--" + flag["name"].replace("_", "-") + \
"=" + flag["default"] + "\n\n"
ret += prefix + "--" + flag["name"].replace("_", "-") + "=" + flag["default"] + "\n\n"
ret += "\n"
ret += wrap_text(config["footer"])
return ret.strip() + "\n"
@ -98,13 +102,9 @@ def generate_config_file(sections, flags):
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("memgraph_binary",
help="path to Memgraph binary")
parser.add_argument("output_file",
help="path where to store the generated Memgraph "
"configuration file")
parser.add_argument("--config-file", default=CONFIG_FILE,
help="path to generator configuration file")
parser.add_argument("memgraph_binary", help="path to Memgraph binary")
parser.add_argument("output_file", help="path where to store the generated Memgraph " "configuration file")
parser.add_argument("--config-file", default=CONFIG_FILE, help="path to generator configuration file")
args = parser.parse_args()
flags = extract_flags(args.memgraph_binary)

config/mappings.json (new file, 26 lines)

@ -0,0 +1,26 @@
{
"dbms.components": "mgps.components",
"apoc.util.validate": "mgps.validate",
"db.schema.nodeTypeProperties": "schema.NodeTypeOroperties",
"db.schema.relTypeProperties": "schema.RelTypeProperties",
"apoc.coll.contains": "collections.contains",
"apoc.coll.partition": "collections.partition",
"apoc.coll.toSet": "collections.to_set",
"apoc.coll.unionAll": "collections.unionAll",
"apoc.coll.removeAll": "collections.remove_all",
"apoc.coll.union": "collections.union",
"apoc.coll.sum": "collections.sum",
"apoc.coll.pairs": "collections.pairs",
"apoc.map.fromLists": "map.from_lists",
"apoc.map.removeKeys": "map.remove_keys",
"apoc.map.merge": "map.merge",
"apoc.create.nodes": "create.nodes",
"apoc.create.removeProperties": "create.remove_properties",
"apoc.create.node": "create.node",
"apoc.create.removeLabel": "create.remove_label",
"apoc.refactor.invert": "refactor.invert",
"apoc.refactor.cloneNode": "refactor.clone_node",
"apoc.refactor.cloneSubgraph": "refactor.clone_subgraph",
"apoc.refactor.cloneSubgraphFromPath": "refactor.clone_subgraph_from_path",
"apoc.label.exists": "label.exists"
}


@ -0,0 +1,230 @@
# CSV Import Tool Documentation
CSV is a universal and very versatile data format used to store large quantities
of data. Each Memgraph database instance has a CSV import tool installed called
`mg_import_csv`. The CSV import tool should be used for initial bulk ingestion
of data into the database. Upon ingestion, the CSV importer creates a snapshot
that will be used by the database to recover its state on its next startup.
If you are already familiar with the Neo4j bulk import tool, then using the
`mg_import_csv` tool should be easy. The CSV import tool is fully compatible
with the [Neo4j CSV
format](https://neo4j.com/docs/operations-manual/current/tools/import/). If you
already have a pipeline set up for Neo4j, you only need to replace `neo4j-admin
import` with `mg_import_csv`.
## CSV File Format
Each row of a CSV file represents a single entry that should be imported into
the database. Both nodes and relationships can be imported into the database
using CSV files.
Each set of CSV files must have a header that describes the data that is stored
in the CSV files. Each field in the CSV header is in the format
`<name>[:<type>]` which identifies the name that should be used for that column
and the type that should be used for that column. The type is optional and
defaults to `string` (see the following chapter).
Each CSV field must be divided using the delimiter, and each CSV field can
either be quoted or unquoted. When the field is quoted, the first and last
character in the field *must* be the quote character. If the field isn't
quoted, and a quote character appears in it, it is treated as a regular
character. If a quote character appears inside a quoted string, then the quote
character must be doubled in order to escape it. Line feeds and carriage
returns are ignored in the CSV file. Also, the file can't contain a NULL
character.
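As a brief illustration of these quoting rules (the example values here are
hypothetical), assuming the default `"` quote character and `,` delimiter:
```plaintext
name:string,motto:string
John,"a ""quoted"" word"
```
imports a node whose `motto` property is the string `a "quoted" word`, because
the doubled quote inside the quoted field escapes a single quote character.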
## Properties
Both nodes and relationships can have properties added to them. When importing
properties, the CSV importer uses the name specified in the header of the
corresponding CSV column for the name of the property. A property is designated
by specifying one of the following types in the header:
- `integer`, `int`, `long`, `byte`, `short`: creates an integer property
- `float`, `double`: creates a float property
- `boolean`, `bool`: creates a boolean property
- `string`, `char`: creates a string property
When importing a boolean value, the CSV field should contain exactly the text
`true` to import a `True` boolean value. All other text values are treated as a
boolean value `False`.
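For example, under this rule only the exact text `true` produces a `True`
value (the rows here are hypothetical):
```plaintext
name:string,verified:boolean
Alice,true
Bob,TRUE
```
imports `verified` as `True` for Alice and as `False` for Bob, since `TRUE` is
not exactly the text `true`.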
If you want to import an array of values, you can do so by appending `[]` to any
of the above types. The values of the array are then determined by splitting
the raw CSV value using the array delimiter character.
Assuming that the array delimiter is `;`, the following example:
```plaintext
first_name,last_name:string,number:integer,aliases:string[]
John,Doe,1,Johnny;Jo;J-man
Melissa,Doe,2,Mel
```
Will yield these results:
```plaintext
CREATE ({first_name: "John", last_name: "Doe", number: 1, aliases: ["Johnny", "Jo", "J-man"]});
CREATE ({first_name: "Melissa", last_name: "Doe", number: 2, aliases: ["Mel"]});
```
### Nodes
When importing nodes, several more types can be specified in the header of the
CSV file (along with all property types):
- `ID`: id of the node that should be used as the node ID when importing
relationships
- `LABEL`: designates that the field contains additional labels for the node
- `IGNORE`: designates that the field should be ignored
The `ID` field type sets the internal ID that will be used for the node when
creating relationships. It is optional and nodes that don't have an ID value
specified will be imported, but can't be connected to any relationships. If you
want to save the ID value as a property in the database, just specify a name for
the ID (`user_id:ID`). If you just want to use the ID during the import, leave
out the name of the field (`:ID`). The `ID` field also supports creating
separate ID spaces. The ID space is specified with the ID space name appended
to the `ID` type in parentheses (`ID(user)`). That allows you to have the same
IDs (by value) for multiple different node files (for example, numbers from 1 to
N). The IDs in each ID space will be treated as an independent set of IDs that
don't interfere with IDs in another ID space.
The `LABEL` field type adds additional labels to the node. The value is treated
as an array type so that multiple additional labels can be specified for each
node. The value is split using the array delimiter (`--array-delimiter` flag).
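To make the `ID` and `LABEL` semantics concrete, here is a hypothetical node
file that keeps the ID as a `user_id` property in the `user` ID space and adds
extra labels from the last column (split on the array delimiter `;`):
```plaintext
user_id:ID(user),name:string,:LABEL
1,Alice,Person;Admin
2,Bob,Person
```
Alice gets the labels `Person` and `Admin`, Bob only `Person`, and both IDs
live in the `user` ID space.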
### Relationships
In order to be able to import relationships, you must import the nodes in the
same invocation of `mg_import_csv` that is used to import the relationships.
When importing relationships, several more types can be specified in the header
of the CSV file (along with all property types):
- `START_ID`: id of the start node that should be connected with the
relationship
- `END_ID`: id of the end node that should be connected with the relationship
- `TYPE`: designates the type of the relationship
- `IGNORE`: designates that the field should be ignored
The `START_ID` field type sets the start node that should be connected with the
relationship to the end node. The field *must* be specified and the node ID
must be one of the node IDs that were specified in the node CSV files. The name
of this field is ignored. If the node ID is in an ID space, you can specify
the ID space for it in the same way as for the node ID (`START_ID(user)`).
The `END_ID` field type sets the end node that should be connected with the
relationship to the start node. The field *must* be specified and the node ID
must be one of the node IDs that were specified in the node CSV files. The name
of this field is ignored. If the node ID is in an ID space, you can specify
the ID space for it in the same way as for the node ID (`END_ID(user)`).
The `TYPE` field type sets the type of the relationship. Each relationship
*must* have a relationship type, but it doesn't necessarily need to be specified
in the CSV file, it can also be set externally for the whole CSV file. The name
of this field is ignored.
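Continuing the hypothetical node file above, a matching relationship file that
connects the two users could look like this:
```plaintext
:START_ID(user),:END_ID(user),:TYPE,since:integer
1,2,FRIENDS_WITH,2010
```
Both endpoint columns reference the `user` ID space, and the relationship type
comes from the `:TYPE` column.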
## CSV Importer Flags
The importer has many command line options that allow you to customize the way
the importer loads your data.
The two main flags that are used to specify the input CSV files are `--nodes`
and `--relationships`. A basic description of these flags is provided in the
table below, and a more detailed explanation follows.
| Flag | Description |
|-----------------------| -------------- |
|`--nodes` | Used to specify CSV files that contain the nodes to the importer. |
|`--relationships` | Used to specify CSV files that contain the relationships to the importer.|
|`--delimiter` | Sets the delimiter that should be used when splitting the CSV fields (default `,`)|
|`--quote` | Sets the quote character that should be used to quote a CSV field (default `"`)|
|`--array-delimiter` | Sets the delimiter that should be used when splitting array values (default `;`)|
|`--id-type` | Specifies which data type should be used to store the supplied <br /> node IDs when storing them as properties (if the field name is supplied). <br /> The supported values are either `STRING` or `INTEGER`. (default `STRING`)|
|`--ignore-empty-strings` | Instructs the importer to treat all empty strings as `Null` values <br /> instead of an empty string value (default `false`)|
|`--ignore-extra-columns` | Instructs the importer to ignore all columns (instead of raising an error) <br /> that aren't specified after the last specified column in the CSV header. (default `false`) |
| `--skip-bad-relationships`| Instructs the importer to ignore all relationships (instead of raising an error) <br /> that refer to nodes that don't exist in the node files. (default `false`) |
|`--skip-duplicate-nodes` | Instructs the importer to ignore all duplicate nodes (instead of raising an error). <br /> Duplicate nodes are nodes that have an ID that is the same as another node that was already imported. (default `false`) |
| `--trim-strings`| Instructs the importer to trim all of the loaded CSV field values before processing them further. <br /> Trimming the fields removes all leading and trailing whitespace from them. (default `false`) |
The `--nodes` and `--relationships` flags are used to specify CSV files that
contain the nodes and relationships to the importer. Multiple files can be
specified in each supplied `--nodes` or `--relationships` flag. Files that are
supplied in one `--nodes` or `--relationships` flag are treated by the CSV
parser as one big CSV file. Only the first line of the first file is parsed for
the CSV header, all other files (and rows) are treated as data. This is useful
when you have a very large CSV file and don't want to edit its first line just
to add a CSV header. Instead, you can specify the header in a separate file
(e.g. `users_header.csv` or `friendships_header.csv`) and have the data intact
in the large file (e.g. `users.csv` or `friendships.csv`). Also, you can supply
additional labels for each set of node files.
The format of `--nodes` flag is:
`[<label>[:<label>]...=]<file>[,<file>][,<file>]...`. Take note that only the
first `<file>` part is mandatory, all other parts of the flag value are
optional. Multiple `--nodes` flags can be supplied to describe multiple sets of
different node files. For the importer to work, at least one `--nodes` flag
*must* be supplied.
The format of `--relationships` flag is: `[<type>=]<file>[,<file>][,<file>]...`.
Take note that only the first `<file>` part is mandatory, all other parts of the
flag value are optional. Multiple `--relationships` flags can be supplied to
describe multiple sets of different relationship files. The `--relationships`
flag isn't mandatory.
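Putting the flag formats together, a hypothetical invocation that uses
separate header files (all file names are illustrative) might look like:
```plaintext
mg_import_csv \
  --nodes=Person=users_header.csv,users.csv \
  --relationships=FRIENDS_WITH=friendships_header.csv,friendships.csv \
  --array-delimiter=';'
```
Here every node from the first file set gets the additional label `Person`,
and every relationship gets the type `FRIENDS_WITH` without needing a `TYPE`
column in the CSV files.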
## CSV Parser Logic
The CSV parser uses the same logic as the standard Python CSV parser. The data
is parsed in the same way as the following snippet:
```python
import csv
for row in csv.reader(stream, strict=True):
# process 'row'
```
Python uses 'excel' as the default dialect when parsing CSV files and the
default settings for the CSV parser are:
- delimiter: `','`
- doublequote: `True`
- escapechar: `None`
- lineterminator: `'\r\n'`
- quotechar: `'"'`
- skipinitialspace: `False`
The above snippet can be expanded to:
```python
import csv
for row in csv.reader(stream, delimiter=',', doublequote=True,
escapechar=None, lineterminator='\r\n',
quotechar='"', skipinitialspace=False,
strict=True):
# process 'row'
```
For more information about the meaning of the above values, see:
https://docs.python.org/3/library/csv.html#csv.Dialect
## Errors
1. [Skipping duplicate node with ID '{}'. For more details, visit:
memgr.ph/csv-import-tool.](#error-1)
2. [Skipping bad relationship with START_ID '{}'. For more details, visit:
memgr.ph/csv-import-tool.](#error-2)
3. [Skipping bad relationship with END_ID '{}'. For more details, visit:
memgr.ph/csv-import-tool.](#error-3)
## Skipping duplicate node with ID {} {#error-1}
Duplicate nodes are nodes that have an ID that is the same as another node that
was already imported. You can instruct the importer to ignore all duplicate
nodes (instead of raising an error) by using the `--skip-duplicate-nodes` flag.
## Skipping bad relationship with START_ID {} {#error-2}
A node with the id `START_ID` doesn't exist. You can instruct the importer to
ignore all bad relationships (instead of raising an error) that refer to nodes
that don't exist in the node files by using the `--skip-bad-relationships` flag.
## Skipping bad relationship with END_ID {} {#error-3}
A node with the id `END_ID` doesn't exist. You can instruct the importer to
ignore all bad relationships (instead of raising an error) that refer to nodes
that don't exist in the node files by using the `--skip-bad-relationships` flag.


@ -1,2 +0,0 @@
archives
build

environment/README.md (new file, 15 lines)

@ -0,0 +1,15 @@
# Memgraph Operating Environments
## Issues related to build toolchain
* GCC 11.2 (toolchain-v4) doesn't compile on Fedora 38 due to a multiple-definitions-of-enum issue
* spdlog 1.10/11 doesn't work with fmt 10.0.0
## os
Under the `os` directory, you can find scripts to install all required system
dependencies on operating systems where Memgraph natively builds. The testing
script helps to see how to install all packages (in the case of a new package),
or make any adjustments in the overall system setup. Also, the testing script
helps check if Memgraph runs on a freshly installed operating system (with no
packages installed).
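Judging from the scripts in this diff, each `os` script is driven by a
function name and the name of a dependency array, so a hypothetical check and
install sequence (script and array names taken from the files below) is:
```plaintext
./environment/os/centos-9.sh check MEMGRAPH_BUILD_DEPS
sudo ./environment/os/centos-9.sh install MEMGRAPH_BUILD_DEPS
```
The `check` call prints any missing packages, and `install` must run as root.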


@ -1,3 +1,6 @@
*.deb
*.deb.*
*.rpm
*.rpm.*
*.tar.gz
*.tar.gz.*

environment/os/amzn-2.sh (new executable file, 190 lines)

@ -0,0 +1,190 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "amzn-2"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
git gcc gcc-c++ make # generic build tools
wget # used for archive download
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
zlib-devel # zlib library used for all builds
expat-devel xz-devel python3-devel texinfo
curl libcurl-devel # for cmake
readline-devel # for cmake and llvm
libffi-devel libxml2-devel # for llvm
libedit-devel pcre-devel pcre2-devel automake bison # for swig
file
openssl-devel
gmp-devel
gperf
diffutils
patch
libipt libipt-devel # intel
perl # for openssl
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz # used for archive unpacking
zlib # zlib library used for all builds
expat xz-libs python3 # for gdb
readline # for cmake and llvm
libffi libxml2 # for llvm
openssl-devel
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make cmake # build system
wget # for downloading libs
libuuid-devel java-11-openjdk # required by antlr
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
openssl
libseccomp-devel
python3 python3-pip nmap-ncat # for tests
#
# IMPORTANT: python3-yaml does NOT exist on CentOS
# Install it using `pip3 install PyYAML`
#
PyYAML # Package name here does not correspond to the yum package!
libcurl-devel # mg-requests
rpm-build rpmlint # for RPM package building
doxygen graphviz # source documentation generators
which nodejs golang custom-golang1.18.9 zip unzip java-11-openjdk-devel jdk-17 custom-maven3.9.3 # for driver tests
autoconf # for jemalloc code generation
libtool # for protobuf code generation
cyrus-sasl-devel
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
local missing=""
# On Fedora, yum/dnf and python10 use a newer glibc which is not compatible
# with ours, so we need to momentarily disable the env
local OLD_LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-""}
LD_LIBRARY_PATH=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == "PyYAML" ]; then
if ! python3 -c "import yaml" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
continue
fi
if ! yum list installed "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
LD_LIBRARY_PATH=${OLD_LD_LIBRARY_PATH}
}
install() {
cd "$DIR"
if [ "$EUID" -ne 0 ]; then
echo "Please run as root."
exit 1
fi
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
yum update -y
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == jdk-17 ]; then
if ! yum list installed jdk-17 >/dev/null 2>/dev/null; then
wget --no-check-certificate -c --header "Cookie: oraclelicense=accept-securebackup-cookie" https://download.oracle.com/java/17/latest/jdk-17_linux-x64_bin.rpm
rpm -Uvh jdk-17_linux-x64_bin.rpm
# NOTE: Set Java 11 as default.
update-alternatives --set java java-11-openjdk.x86_64
update-alternatives --set javac java-11-openjdk.x86_64
fi
continue
fi
if [ "$pkg" == libipt ]; then
if ! yum list installed libipt >/dev/null 2>/dev/null; then
yum install -y http://repo.okay.com.mx/centos/8/x86_64/release/libipt-1.6.1-8.el8.x86_64.rpm
fi
continue
fi
if [ "$pkg" == libipt-devel ]; then
if ! yum list installed libipt-devel >/dev/null 2>/dev/null; then
yum install -y http://repo.okay.com.mx/centos/8/x86_64/release/libipt-devel-1.6.1-8.el8.x86_64.rpm
fi
continue
fi
if [ "$pkg" == nodejs ]; then
if ! yum list installed nodejs >/dev/null 2>/dev/null; then
yum install https://rpm.nodesource.com/pub_16.x/nodistro/repo/nodesource-release-nodistro-1.noarch.rpm -y
yum install nodejs -y --setopt=nodesource-nodejs.module_hotfixes=1
fi
continue
fi
if [ "$pkg" == PyYAML ]; then
if [ -z ${SUDO_USER+x} ]; then # Running as root (e.g. Docker).
pip3 install --user PyYAML
else # Running using sudo.
sudo -H -u "$SUDO_USER" bash -c "pip3 install --user PyYAML"
fi
continue
fi
if [ "$pkg" == java-11-openjdk ]; then
amazon-linux-extras install -y java-openjdk11
continue
fi
if [ "$pkg" == java-11-openjdk-devel ]; then
amazon-linux-extras install -y java-openjdk11
yum install -y java-11-openjdk-devel
continue
fi
yum install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

environment/os/centos-7.sh

@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -20,7 +18,7 @@ TOOLCHAIN_BUILD_DEPS=(
curl # snappy
readline-devel # cmake and llvm
libffi-devel libxml2-devel perl-Digest-MD5 # llvm
libedit-devel pcre-devel automake bison # swig
libedit-devel pcre-devel pcre2-devel automake bison # swig
file
openssl-devel
gmp-devel
@ -39,12 +37,13 @@ TOOLCHAIN_RUN_DEPS=(
)
MEMGRAPH_BUILD_DEPS=(
make pkgconfig # build system
make cmake pkgconfig # build system
curl wget # for downloading libs
libuuid-devel java-11-openjdk # required by antlr
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
openssl
libseccomp-devel
python3 python-virtualenv python3-pip nmap-ncat # for qa, macro_benchmark and stress tests
#
@ -56,9 +55,21 @@ MEMGRAPH_BUILD_DEPS=(
sbcl # for custom Lisp C++ preprocessing
rpm-build rpmlint # for RPM package building
doxygen graphviz # source documentation generators
which mono-complete dotnet-sdk-3.1 golang nodejs zip unzip java-11-openjdk-devel # for driver tests
which mono-complete dotnet-sdk-3.1 golang custom-golang1.18.9 # for driver tests
nodejs zip unzip java-11-openjdk-devel jdk-17 custom-maven3.9.3 # for driver tests
autoconf # for jemalloc code generation
libtool # for protobuf code generation
cyrus-sasl-devel
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
@ -68,6 +79,18 @@ list() {
check() {
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == git ]; then
if ! which "git" >/dev/null; then
missing="git $missing"
@ -110,7 +133,25 @@ install() {
yum update -y
yum install -y wget python3 python3-pip
yum install -y git
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == jdk-17 ]; then
if ! yum list installed jdk-17 >/dev/null 2>/dev/null; then
wget https://download.oracle.com/java/17/latest/jdk-17_linux-x64_bin.rpm
rpm -ivh jdk-17_linux-x64_bin.rpm
update-alternatives --set java java-11-openjdk.x86_64
update-alternatives --set javac java-11-openjdk.x86_64
fi
continue
fi
if [ "$pkg" == libipt ]; then
if ! yum list installed libipt >/dev/null 2>/dev/null; then
yum install -y http://repo.okay.com.mx/centos/8/x86_64/release/libipt-1.6.1-8.el8.x86_64.rpm

environment/os/centos-9.sh

@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -9,15 +7,17 @@ check_operating_system "centos-9"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
coreutils-common gcc gcc-c++ make # generic build tools
wget # used for archive download
coreutils-common gcc gcc-c++ make # generic build tools
# NOTE: Pure libcurl conflicts with libcurl-minimal
libcurl-devel # cmake build requires it
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
zlib-devel # zlib library used for all builds
expat-devel xz-devel python3-devel texinfo libbabeltrace-devel # for gdb
readline-devel # for cmake and llvm
libffi-devel libxml2-devel # for llvm
libedit-devel pcre-devel automake bison # for swig
libedit-devel pcre-devel pcre2-devel automake bison # for swig
file
openssl-devel
gmp-devel
@ -40,7 +40,7 @@ TOOLCHAIN_RUN_DEPS=(
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkgconf-pkg-config # build system
make cmake pkgconf-pkg-config # build system
wget # for downloading libs
libuuid-devel java-11-openjdk # required by antlr
readline-devel # for memgraph console
@ -56,10 +56,22 @@ MEMGRAPH_BUILD_DEPS=(
libcurl-devel # mg-requests
rpm-build rpmlint # for RPM package building
doxygen graphviz # source documentation generators
which nodejs golang zip unzip java-11-openjdk-devel # for driver tests
which nodejs golang custom-golang1.18.9 # for driver tests
zip unzip java-11-openjdk-devel java-17-openjdk java-17-openjdk-devel custom-maven3.9.3 # for driver tests
sbcl # for custom Lisp C++ preprocessing
autoconf # for jemalloc code generation
libtool # for protobuf code generation
cyrus-sasl-devel
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
@ -69,6 +81,18 @@ list() {
check() {
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == "PyYAML" ]; then
if ! python3 -c "import yaml" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
@ -101,9 +125,20 @@ install() {
else
echo "NOTE: export LANG=en_US.utf8"
fi
yum update -y
# --nobest is used because we install custom versions of libipt, since
# libipt-devel is not available on CentOS 9 Stream
yum update -y --nobest
yum install -y wget git python3 python3-pip
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
# Since there is no support for libipt-devel on CentOS 9, we install
# Fedora's version of the same libs; they are the same version but
# released for a different OS

environment/os/debian-10.sh

@ -1,10 +1,10 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# IMPORTANT: Deprecated since memgraph v2.12.0.
check_operating_system "debian-10"
check_architecture "x86_64"
@ -24,7 +24,7 @@ TOOLCHAIN_BUILD_DEPS=(
libgmp-dev # for gdb
gperf # for proxygen
git # for fbthrift
libedit-dev libpcre3-dev automake bison # for swig
libedit-dev libpcre2-dev libpcre3-dev automake bison # for swig
)
TOOLCHAIN_RUN_DEPS=(
@ -40,7 +40,7 @@ TOOLCHAIN_RUN_DEPS=(
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkg-config # build system
make cmake pkg-config # build system
curl wget # for downloading libs
uuid-dev default-jre-headless # required by antlr
libreadline-dev # for memgraph console
@ -53,10 +53,19 @@ MEMGRAPH_BUILD_DEPS=(
libcurl4-openssl-dev # mg-requests
sbcl # for custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs zip unzip default-jdk-headless # for driver tests
dotnet-sdk-3.1 golang nodejs npm
mono-runtime mono-mcs zip unzip default-jdk-headless oracle-java17-installer custom-maven3.9.3 # for driver tests
dotnet-sdk-3.1 golang custom-golang1.18.9 nodejs npm # for driver tests
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
@ -64,7 +73,28 @@ list() {
}
check() {
check_all_dpkg "$1"
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
@ -75,8 +105,15 @@ deb http://deb.debian.org/debian/ buster-updates main contrib non-free
deb-src http://deb.debian.org/debian/ buster-updates main contrib non-free
deb http://security.debian.org/debian-security buster/updates main contrib non-free
deb-src http://security.debian.org/debian-security buster/updates main contrib non-free
EOF
apt --allow-releaseinfo-change update
cat >/etc/apt/sources.list.d/java.list << EOF
deb http://ppa.launchpad.net/linuxuprising/java/ubuntu bionic main
deb-src http://ppa.launchpad.net/linuxuprising/java/ubuntu bionic main
EOF
cd "$DIR"
apt install -y gnupg
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EA8CACC073C3DB2A
apt --allow-releaseinfo-change update
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
@ -85,8 +122,26 @@ EOF
else
echo "NOTE: export LANG=en_US.utf8"
fi
apt install -y wget
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == oracle-java17-installer ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
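# Pre-accept the Oracle license via debconf so the installer can run non-interactively.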
echo oracle-java17-installer shared/accepted-oracle-license-v1-3 select true | /usr/bin/debconf-set-selections
echo oracle-java17-installer shared/accepted-oracle-license-v1-3 seen true | /usr/bin/debconf-set-selections
apt install -y "$pkg"
update-alternatives --set java /usr/lib/jvm/java-11-openjdk-amd64/bin/java
update-alternatives --set javac /usr/lib/jvm/java-11-openjdk-amd64/bin/javac
fi
continue
fi
if [ "$pkg" == dotnet-sdk-3.1 ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
wget -nv https://packages.microsoft.com/config/debian/10/packages-microsoft-prod.deb -O packages-microsoft-prod.deb


@ -1,12 +1,12 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# IMPORTANT: Deprecated since memgraph v2.12.0.
check_operating_system "debian-11"
check_architecture "arm64"
check_architecture "arm64" "aarch64"
TOOLCHAIN_BUILD_DEPS=(
coreutils gcc g++ build-essential make # generic build tools
@ -18,7 +18,7 @@ TOOLCHAIN_BUILD_DEPS=(
libcurl4-openssl-dev # for cmake
libreadline-dev # for cmake and llvm
libffi-dev libxml2-dev # for llvm
libedit-dev libpcre3-dev automake bison # for swig
libedit-dev libpcre2-dev libpcre3-dev automake bison # for swig
curl # snappy
file # for libunwind
libssl-dev # for libevent
@ -54,10 +54,19 @@ MEMGRAPH_BUILD_DEPS=(
libcurl4-openssl-dev # mg-requests
sbcl # for custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs zip unzip default-jdk-headless # for driver tests
golang nodejs npm
mono-runtime mono-mcs zip unzip default-jdk-headless openjdk-17-jdk custom-maven3.9.3 # for driver tests
golang custom-golang1.18.9 nodejs npm
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
@ -65,7 +74,28 @@ list() {
}
check() {
check_all_dpkg "$1"
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
@ -89,7 +119,25 @@ EOF
echo "NOTE: export LANG=en_US.utf8"
fi
apt install -y wget
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == openjdk-17-jdk ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
apt install -y "$pkg"
# The default Java version should be Java 11
update-alternatives --set java /usr/lib/jvm/java-11-openjdk-arm64/bin/java
update-alternatives --set javac /usr/lib/jvm/java-11-openjdk-arm64/bin/javac
fi
continue
fi
apt install -y "$pkg"
done
}


@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -18,7 +16,7 @@ TOOLCHAIN_BUILD_DEPS=(
libcurl4-openssl-dev # for cmake
libreadline-dev # for cmake and llvm
libffi-dev libxml2-dev # for llvm
libedit-dev libpcre3-dev automake bison # for swig
libedit-dev libpcre2-dev libpcre3-dev automake bison # for swig
curl # snappy
file # for libunwind
libssl-dev # for libevent
@ -41,7 +39,7 @@ TOOLCHAIN_RUN_DEPS=(
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkg-config # build system
make cmake pkg-config # build system
curl wget # for downloading libs
uuid-dev default-jre-headless # required by antlr
libreadline-dev # for memgraph console
@ -54,10 +52,21 @@ MEMGRAPH_BUILD_DEPS=(
libcurl4-openssl-dev # mg-requests
sbcl # for custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs zip unzip default-jdk-headless # for driver tests
dotnet-sdk-3.1 golang nodejs npm
mono-runtime mono-mcs zip unzip default-jdk-headless openjdk-17-jdk custom-maven3.9.3 # for driver tests
dotnet-sdk-3.1 golang custom-golang1.18.9 nodejs npm
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
@ -65,7 +74,28 @@ list() {
}
check() {
check_all_dpkg "$1"
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
@ -89,7 +119,25 @@ EOF
echo "NOTE: export LANG=en_US.utf8"
fi
apt install -y wget
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == openjdk-17-jdk ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
apt install -y "$pkg"
# The default Java version should be Java 11
update-alternatives --set java /usr/lib/jvm/java-11-openjdk-amd64/bin/java
update-alternatives --set javac /usr/lib/jvm/java-11-openjdk-amd64/bin/javac
fi
continue
fi
if [ "$pkg" == dotnet-sdk-3.1 ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
wget -nv https://packages.microsoft.com/config/debian/10/packages-microsoft-prod.deb -O packages-microsoft-prod.deb

environment/os/debian-12-arm.sh Executable file

@ -0,0 +1,134 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "debian-12"
check_architecture "arm64" "aarch64"
TOOLCHAIN_BUILD_DEPS=(
coreutils gcc g++ build-essential make # generic build tools
wget # used for archive download
gnupg # used for archive signature verification
tar gzip bzip2 xz-utils unzip # used for archive unpacking
zlib1g-dev # zlib library used for all builds
libexpat1-dev liblzma-dev python3-dev texinfo # for gdb
libcurl4-openssl-dev # for cmake
libreadline-dev # for cmake and llvm
libffi-dev libxml2-dev # for llvm
libedit-dev libpcre2-dev libpcre3-dev automake bison # for swig
curl # snappy
file # for libunwind
libssl-dev # for libevent
libgmp-dev
gperf # for proxygen
git # for fbthrift
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz-utils # used for archive unpacking
zlib1g # zlib library used for all builds
libexpat1 liblzma5 python3 # for gdb
libcurl4 # for cmake
file # for CPack
libreadline8 # for cmake and llvm
libffi8 libxml2 # for llvm
libssl-dev # for libevent
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkg-config # build system
curl wget # for downloading libs
uuid-dev default-jre-headless # required by antlr
libreadline-dev # for memgraph console
libpython3-dev python3-dev # for query modules
libssl-dev
libseccomp-dev
netcat # tests are using nc to wait for memgraph
python3 virtualenv python3-virtualenv python3-pip # for qa, macro_benchmark and stress tests
python3-yaml # for the configuration generator
libcurl4-openssl-dev # mg-requests
sbcl # for custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs zip unzip default-jdk-headless custom-maven3.9.3 # for driver tests
dotnet-sdk-7.0 golang custom-golang1.18.9 nodejs npm
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
local missing=""
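# The custom-* entries are not distro packages; they are detected via the sentinel
# binaries that install_custom_maven/install_custom_golang (presumably defined in
# the sourced util.sh) place under /opt.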
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
cd "$DIR"
apt update
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
apt install -y wget
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == dotnet-sdk-7.0 ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
wget -nv https://packages.microsoft.com/config/debian/12/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
dpkg -i packages-microsoft-prod.deb
apt-get update
apt-get install -y apt-transport-https dotnet-sdk-7.0
fi
continue
fi
apt install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

environment/os/debian-12.sh Executable file

@ -0,0 +1,136 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "debian-12"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
coreutils gcc g++ build-essential make # generic build tools
wget # used for archive download
gnupg # used for archive signature verification
tar gzip bzip2 xz-utils unzip # used for archive unpacking
zlib1g-dev # zlib library used for all builds
libexpat1-dev libipt-dev libbabeltrace-dev liblzma-dev python3-dev texinfo # for gdb
libcurl4-openssl-dev # for cmake
libreadline-dev # for cmake and llvm
libffi-dev libxml2-dev # for llvm
libedit-dev libpcre2-dev libpcre3-dev automake bison # for swig
curl # snappy
file # for libunwind
libssl-dev # for libevent
libgmp-dev
gperf # for proxygen
git # for fbthrift
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz-utils # used for archive unpacking
zlib1g # zlib library used for all builds
libexpat1 libipt2 libbabeltrace1 liblzma5 python3 # for gdb
libcurl4 # for cmake
file # for CPack
libreadline8 # for cmake and llvm
libffi8 libxml2 # for llvm
libssl-dev # for libevent
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make cmake pkg-config # build system
curl wget # for downloading libs
uuid-dev default-jre-headless # required by antlr
libreadline-dev # for memgraph console
libpython3-dev python3-dev # for query modules
libssl-dev
libseccomp-dev
netcat-traditional # tests are using nc to wait for memgraph
python3 virtualenv python3-virtualenv python3-pip # for qa, macro_benchmark and stress tests
python3-yaml # for the configuration generator
libcurl4-openssl-dev # mg-requests
sbcl # for custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs zip unzip default-jdk-headless custom-maven3.9.3 # for driver tests
dotnet-sdk-7.0 golang custom-golang1.18.9 nodejs npm
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
cd "$DIR"
apt update
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
apt install -y wget
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == dotnet-sdk-7.0 ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
wget -nv https://packages.microsoft.com/config/debian/12/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
dpkg -i packages-microsoft-prod.deb
apt-get update
apt-get install -y apt-transport-https dotnet-sdk-7.0
fi
continue
fi
apt install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

environment/os/fedora-36.sh Executable file

@ -0,0 +1,150 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# IMPORTANT: Deprecated since memgraph v2.12.0.
check_operating_system "fedora-36"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
coreutils-common gcc gcc-c++ make # generic build tools
wget # used for archive download
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
zlib-devel # zlib library used for all builds
expat-devel xz-devel python3-devel texinfo libbabeltrace-devel # for gdb
curl libcurl-devel # for cmake
readline-devel # for cmake and llvm
libffi-devel libxml2-devel # for llvm
libedit-devel pcre-devel pcre2-devel automake bison # for swig
file
openssl-devel
gmp-devel
gperf
diffutils
libipt libipt-devel # intel
patch
perl # for openssl
git
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz # used for archive unpacking
zlib # zlib library used for all builds
expat xz-libs python3 # for gdb
readline # for cmake and llvm
libffi libxml2 # for llvm
openssl-devel
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make cmake pkgconf-pkg-config # build system
wget # for downloading libs
libuuid-devel java-11-openjdk # required by antlr
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
libseccomp-devel
python3 python3-pip python3-virtualenv python3-virtualenvwrapper python3-pyyaml nmap-ncat # for tests
libcurl-devel # mg-requests
rpm-build rpmlint # for RPM package building
doxygen graphviz # source documentation generators
java-11-openjdk-devel java-17-openjdk-devel custom-maven3.9.3 # for driver tests
which zip unzip
nodejs golang custom-golang1.18.9 # for driver tests
sbcl # for custom Lisp C++ preprocessing
autoconf # for jemalloc code generation
libtool # for protobuf code generation
cyrus-sasl-devel
)
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
local missing=""
if [ -v LD_LIBRARY_PATH ]; then
# On Fedora, yum/dnf and python3.10 use a newer glibc which is not compatible
# with ours, so we need to momentarily disable the env var
local OLD_LD_LIBRARY_PATH=${LD_LIBRARY_PATH}
LD_LIBRARY_PATH=""
fi
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dnf list installed "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
if [ -v OLD_LD_LIBRARY_PATH ]; then
echo "Restoring LD_LIBRARY_PATH..."
LD_LIBRARY_PATH=${OLD_LD_LIBRARY_PATH}
fi
}
install() {
cd "$DIR"
if [ "$EUID" -ne 0 ]; then
echo "Please run as root."
exit 1
fi
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
dnf update -y
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == java-17-openjdk-devel ]; then
if ! dnf list installed "$pkg" >/dev/null 2>/dev/null; then
dnf install -y "$pkg"
# The default Java version should be Java 11
update-alternatives --set java java-11-openjdk.x86_64
update-alternatives --set javac java-11-openjdk.x86_64
fi
continue
fi
dnf install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

environment/os/fedora-38.sh Executable file

@ -0,0 +1,117 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "fedora-38"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
coreutils-common gcc gcc-c++ make # generic build tools
wget # used for archive download
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
zlib-devel # zlib library used for all builds
expat-devel xz-devel python3-devel texinfo libbabeltrace-devel # for gdb
curl libcurl-devel # for cmake
readline-devel # for cmake and llvm
libffi-devel libxml2-devel # for llvm
libedit-devel pcre-devel pcre2-devel automake bison # for swig
file
openssl-devel
gmp-devel
gperf
diffutils
libipt libipt-devel # intel
patch
perl # for openssl
git
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz # used for archive unpacking
zlib # zlib library used for all builds
expat xz-libs python3 # for gdb
readline # for cmake and llvm
libffi libxml2 # for llvm
openssl-devel
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkgconf-pkg-config # build system
wget # for downloading libs
libuuid-devel java-11-openjdk # required by antlr
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
libseccomp-devel
python3 python3-pip python3-virtualenv python3-virtualenvwrapper python3-pyyaml nmap-ncat # for tests
libcurl-devel # mg-requests
rpm-build rpmlint # for RPM package building
doxygen graphviz # source documentation generators
which nodejs golang zip unzip java-11-openjdk-devel # for driver tests
sbcl # for custom Lisp C++ preprocessing
autoconf # for jemalloc code generation
libtool # for protobuf code generation
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
if [ -v LD_LIBRARY_PATH ]; then
# On Fedora 38, yum/dnf and python3.11 use a newer glibc which is not compatible
# with ours, so we need to momentarily disable the env var
local OLD_LD_LIBRARY_PATH=${LD_LIBRARY_PATH}
LD_LIBRARY_PATH=""
fi
local missing=""
for pkg in $1; do
if ! dnf list installed "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
if [ -v OLD_LD_LIBRARY_PATH ]; then
echo "Restoring LD_LIBRARY_PATH..."
LD_LIBRARY_PATH=${OLD_LD_LIBRARY_PATH}
fi
}
install() {
cd "$DIR"
if [ "$EUID" -ne 0 ]; then
echo "Please run as root."
exit 1
fi
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
dnf update -y
for pkg in $1; do
dnf install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

environment/os/fedora-39.sh Executable file

@ -0,0 +1,117 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "fedora-39"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
coreutils-common gcc gcc-c++ make # generic build tools
wget # used for archive download
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
zlib-devel # zlib library used for all builds
expat-devel xz-devel python3-devel texinfo libbabeltrace-devel # for gdb
curl libcurl-devel # for cmake
readline-devel # for cmake and llvm
libffi-devel libxml2-devel # for llvm
libedit-devel pcre-devel pcre2-devel automake bison # for swig
file
openssl-devel
gmp-devel
gperf
diffutils
libipt libipt-devel # intel
patch
perl # for openssl
git
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz # used for archive unpacking
zlib # zlib library used for all builds
expat xz-libs python3 # for gdb
readline # for cmake and llvm
libffi libxml2 # for llvm
openssl-devel
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkgconf-pkg-config # build system
wget # for downloading libs
libuuid-devel java-11-openjdk # required by antlr
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
libseccomp-devel
python3 python3-pip python3-virtualenv python3-virtualenvwrapper python3-pyyaml nmap-ncat # for tests
libcurl-devel # mg-requests
rpm-build rpmlint # for RPM package building
doxygen graphviz # source documentation generators
which nodejs golang zip unzip java-11-openjdk-devel # for driver tests
sbcl # for custom Lisp C++ preprocessing
autoconf # for jemalloc code generation
libtool # for protobuf code generation
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
if [ -v LD_LIBRARY_PATH ]; then
# On Fedora 39, yum/dnf and the system python use a newer glibc which is not compatible
# with ours, so we need to momentarily disable the env var
local OLD_LD_LIBRARY_PATH=${LD_LIBRARY_PATH}
LD_LIBRARY_PATH=""
fi
local missing=""
for pkg in $1; do
if ! dnf list installed "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
if [ -v OLD_LD_LIBRARY_PATH ]; then
echo "Restoring LD_LIBRARY_PATH..."
LD_LIBRARY_PATH=${OLD_LD_LIBRARY_PATH}
fi
}
install() {
cd "$DIR"
if [ "$EUID" -ne 0 ]; then
echo "Please run as root."
exit 1
fi
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
dnf update -y
for pkg in $1; do
dnf install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

environment/os/rocky-9.3.sh Executable file

@ -0,0 +1,212 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# TODO(gitbuda): Rocky gets automatic updates -> figure out how to handle that.
check_operating_system "rocky-9.3"
check_architecture "x86_64"
TOOLCHAIN_BUILD_DEPS=(
wget # used for archive download
coreutils-common gcc gcc-c++ make # generic build tools
# NOTE: Pure libcurl conflicts with libcurl-minimal
libcurl-devel # cmake build requires it
gnupg2 # used for archive signature verification
tar gzip bzip2 xz unzip # used for archive unpacking
zlib-devel # zlib library used for all builds
expat-devel xz-devel python3-devel perl-Unicode-EastAsianWidth texinfo libbabeltrace-devel # for gdb
readline-devel # for cmake and llvm
libffi-devel libxml2-devel # for llvm
libedit-devel pcre-devel pcre2-devel automake bison # for swig
file
openssl-devel
gmp-devel
gperf
diffutils
libipt libipt-devel # intel
patch
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz # used for archive unpacking
zlib # zlib library used for all builds
expat xz-libs python3 # for gdb
readline # for cmake and llvm
libffi libxml2 # for llvm
openssl-devel
perl # for openssl
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make cmake pkgconf-pkg-config # build system
wget # for downloading libs
libuuid-devel java-11-openjdk # required by antlr
readline-devel # for memgraph console
python3-devel # for query modules
openssl-devel
libseccomp-devel
python3 python3-pip python3-virtualenv nmap-ncat # for qa, macro_benchmark and stress tests
#
# IMPORTANT: python3-yaml does NOT exist on CentOS
# Install it manually using `pip3 install PyYAML`
#
PyYAML # Package name here does not correspond to the yum package!
libcurl-devel # mg-requests
rpm-build rpmlint # for RPM package building
doxygen graphviz # source documentation generators
which nodejs golang custom-golang1.18.9 # for driver tests
zip unzip java-11-openjdk-devel java-17-openjdk java-17-openjdk-devel custom-maven3.9.3 # for driver tests
cl-asdf common-lisp-controller sbcl # for custom Lisp C++ preprocessing
autoconf # for jemalloc code generation
libtool # for protobuf code generation
cyrus-sasl-devel
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == "PyYAML" ]; then
if ! python3 -c "import yaml" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == "python3-virtualenv" ]; then
continue
fi
if ! yum list installed "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
cd "$DIR"
if [ "$EUID" -ne 0 ]; then
echo "Please run as root."
exit 1
fi
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
yum update -y
yum install -y wget git python3 python3-pip
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == perl-Unicode-EastAsianWidth ]; then
if ! dnf list installed perl-Unicode-EastAsianWidth >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/CRB/x86_64/os/Packages/p/perl-Unicode-EastAsianWidth-12.0-7.el9.noarch.rpm
fi
continue
fi
if [ "$pkg" == texinfo ]; then
if ! dnf list installed texinfo >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/CRB/x86_64/os/Packages/t/texinfo-6.7-15.el9.x86_64.rpm
fi
continue
fi
if [ "$pkg" == libbabeltrace-devel ]; then
if ! dnf list installed libbabeltrace-devel >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/devel/x86_64/os/Packages/l/libbabeltrace-devel-1.5.8-10.el9.x86_64.rpm
fi
continue
fi
if [ "$pkg" == libipt-devel ]; then
if ! dnf list installed libipt-devel >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/devel/x86_64/os/Packages/l/libipt-devel-2.0.4-5.el9.x86_64.rpm
fi
continue
fi
if [ "$pkg" == doxygen ]; then
if ! dnf list installed doxygen >/dev/null 2>/dev/null; then
dnf install -y https://dl.rockylinux.org/pub/rocky/9/CRB/x86_64/os/Packages/d/doxygen-1.9.1-11.el9.x86_64.rpm
fi
continue
fi
if [ "$pkg" == cl-asdf ]; then
if ! dnf list installed cl-asdf >/dev/null 2>/dev/null; then
dnf install -y https://pkgs.sysadmins.ws/el8/base/x86_64/cl-asdf-20101028-18.el8.noarch.rpm
fi
continue
fi
if [ "$pkg" == common-lisp-controller ]; then
if ! dnf list installed common-lisp-controller >/dev/null 2>/dev/null; then
dnf install -y https://pkgs.sysadmins.ws/el8/base/x86_64/common-lisp-controller-7.4-20.el8.noarch.rpm
fi
continue
fi
if [ "$pkg" == sbcl ]; then
if ! dnf list installed sbcl >/dev/null 2>/dev/null; then
dnf install -y https://pkgs.sysadmins.ws/el8/base/x86_64/sbcl-2.0.1-4.el8.x86_64.rpm
fi
continue
fi
if [ "$pkg" == PyYAML ]; then
if [ -z ${SUDO_USER+x} ]; then # Running as root (e.g. Docker).
pip3 install --user PyYAML
else # Running using sudo.
sudo -H -u "$SUDO_USER" bash -c "pip3 install --user PyYAML"
fi
continue
fi
if [ "$pkg" == python3-virtualenv ]; then
if [ -z ${SUDO_USER+x} ]; then # Running as root (e.g. Docker).
pip3 install virtualenv
pip3 install virtualenvwrapper
else # Running using sudo.
sudo -H -u "$SUDO_USER" bash -c "pip3 install virtualenv"
sudo -H -u "$SUDO_USER" bash -c "pip3 install virtualenvwrapper"
fi
continue
fi
yum install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"

environment/os/run.sh Executable file

@ -0,0 +1,158 @@
#!/bin/bash
set -Eeuo pipefail
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
IFS=' '
# NOTE: docker_image_name could be a local image built on top of the release/package images.
# NOTE: each line has to be quoted; docker_container_type, script_name and docker_image_name are separated with a space.
# "docker_container_type script_name docker_image_name"
# docker_container_type OPTIONS:
# * mgrun -> runs a plain/empty operating system for the purpose of testing the native memgraph package
# * mgbuild -> runs the builder container to build memgraph inside it -> it's possible to create builder images using release/package/run.sh
OPERATING_SYSTEMS=(
# "mgrun amzn-2 amazonlinux:2"
# "mgrun centos-7 centos:7"
# "mgrun centos-9 dokken/centos-stream-9"
# "mgrun debian-10 debian:10"
# "mgrun debian-11 debian:11"
# "mgrun fedora-36 fedora:36"
# "mgrun ubuntu-18.04 ubuntu:18.04"
# "mgrun ubuntu-20.04 ubuntu:20.04"
# "mgrun ubuntu-22.04 ubuntu:22.04"
# "mgbuild debian-12 memgraph/memgraph-builder:v5_debian-12"
)
if [ ! "$(docker info)" ]; then
echo "ERROR: Docker is required"
exit 1
fi
print_help () {
echo -e "$0 all\t\t\t\t => start + init all containers in the background"
echo -e "$0 check\t\t\t\t => check all containers"
echo -e "$0 delete\t\t\t\t => stop + remove all containers"
echo -e "$0 copy src_container dst_container => copy build package from src to dst container"
exit 1
}
# NOTE: This is an idempotent operation!
# TODO(gitbuda): Consider making docker_run always delete + start a new container or add a new function.
docker_run () {
cnt_type="$1"
if [[ "$cnt_type" != "mgbuild" && "$cnt_type" != "mgrun" ]]; then
echo "ERROR: Wrong docker_container_type -> valid options are mgbuild, mgrun"
exit 1
fi
cnt_name="$2"
cnt_image="$3"
if [ ! "$(docker ps -q -f name=$cnt_name)" ]; then
if [ "$(docker ps -aq -f status=exited -f name=$cnt_name)" ]; then
echo "Cleanup of the old exited container..."
docker rm $cnt_name
fi
if [[ "$cnt_type" == "mgbuild" ]]; then
docker run -d --volume "$SCRIPT_DIR/../../:/memgraph" --network host --name "$cnt_name" "$cnt_image"
fi
if [[ "$cnt_type" == "mgrun" ]]; then
docker run -d --volume "$SCRIPT_DIR/../../:/memgraph" --network host --name "$cnt_name" "$cnt_image" sleep infinity
fi
fi
echo "The $cnt_image container is active under $cnt_name name!"
}
docker_exec () {
cnt_name="$1"
cnt_cmd="$2"
docker exec -it "$cnt_name" bash -c "$cnt_cmd"
}
docker_stop_and_rm () {
cnt_name="$1"
if [ "$(docker ps -q -f name=$cnt_name)" ]; then
docker stop "$1"
fi
if [ "$(docker ps -aq -f status=exited -f name=$cnt_name)" ]; then
docker rm "$1"
fi
}
# TODO(gitbuda): Make the call to `install NEW_DEPS` configurable; the question is what else would be useful.
start_all () {
for script_docker_pair in "${OPERATING_SYSTEMS[@]}"; do
read -a script_docker <<< "$script_docker_pair"
docker_container_type="${script_docker[0]}"
script_name="${script_docker[1]}"
docker_image="${script_docker[2]}"
docker_name="${docker_container_type}_$script_name"
echo ""
echo "~~~~ OPERATING ON $docker_image as $docker_name..."
docker_run "$docker_container_type" "$docker_name" "$docker_image"
docker_exec "$docker_name" "/memgraph/environment/os/$script_name.sh install NEW_DEPS"
echo "---- DONE EVERYHING FOR $docker_image as $docker_name..."
echo ""
done
}
check_all () {
for script_docker_pair in "${OPERATING_SYSTEMS[@]}"; do
read -a script_docker <<< "$script_docker_pair"
docker_container_type="${script_docker[0]}"
script_name="${script_docker[1]}"
docker_image="${script_docker[2]}"
docker_name="${docker_container_type}_$script_name"
echo ""
echo "~~~~ OPERATING ON $docker_image as $docker_name..."
docker_exec "$docker_name" "/memgraph/environment/os/$script_name.sh check NEW_DEPS"
echo "---- DONE EVERYHING FOR $docker_image as $docker_name..."
echo ""
done
}
delete_all () {
for script_docker_pair in "${OPERATING_SYSTEMS[@]}"; do
read -a script_docker <<< "$script_docker_pair"
docker_container_type="${script_docker[0]}"
script_name="${script_docker[1]}"
docker_image="${script_docker[2]}"
docker_name="${docker_container_type}_$script_name"
docker_stop_and_rm "$docker_name"
echo "~~~~ $docker_image as $docker_name DELETED"
done
}
# TODO(gitbuda): Copying a file between containers is a useful util; also add delete, and consider copying a whole folder.
# TODO(gitbuda): Add args: src_cnt dst_cnt abs_path; both file and recursive folder, always delete + copy.
copy_build_package () {
src_container="$1"
dst_container="$2"
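# NOTE: docker cp cannot copy directly between containers, so the package takes
# a two-hop route through a temporary directory on the host.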
src="$src_container:/memgraph/build/output"
tmp_dst="$SCRIPT_DIR/../../build"
mkdir -p "$tmp_dst"
rm -rf "$tmp_dst/output"
dst="$dst_container:/"
docker cp "$src" "$tmp_dst"
docker cp "$tmp_dst/output" "$dst"
}
if [ "$#" -eq 0 ]; then
print_help
else
case $1 in
all)
start_all
;;
check)
check_all
;;
delete)
delete_all
;;
copy) # src_container dst_container
if [ "$#" -ne 3 ]; then
print_help
fi
copy_build_package "$2" "$3"
;;
*)
print_help
;;
esac
fi
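A short usage sketch for run.sh, assuming at least one OPERATING_SYSTEMS line is uncommented; the container names below are hypothetical and follow the `<docker_container_type>_<script_name>` pattern used above:

./run.sh all       # start + init all configured containers
./run.sh check     # re-run the NEW_DEPS check inside each container
./run.sh copy mgbuild_debian-12 mgrun_debian-12   # move build/output between containers
./run.sh delete    # stop + remove everything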


@ -1,11 +1,10 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "todo-os-name"
check_architecture "todo-arch-name"
TOOLCHAIN_BUILD_DEPS=(
pkg
@ -19,6 +18,20 @@ MEMGRAPH_BUILD_DEPS=(
pkg
)
MEMGRAPH_TEST_DEPS=(
pkg
)
MEMGRAPH_RUN_DEPS=(
pkg
)
# NEW_DEPS is useful when you want to test the installation of a new package.
# During the test you can put packages like wget curl tar gzip here
NEW_DEPS=(
pkg
)
list() {
echo "$1"
}


@ -1,10 +1,10 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
# IMPORTANT: Deprecated since memgraph v2.12.0.
check_operating_system "ubuntu-18.04"
check_architecture "x86_64"
@ -25,7 +25,7 @@ TOOLCHAIN_BUILD_DEPS=(
libgmp-dev # for gdb
gperf # for proxygen
libssl-dev
libedit-dev libpcre3-dev automake bison # swig
libedit-dev libpcre2-dev libpcre3-dev automake bison # swig
)
TOOLCHAIN_RUN_DEPS=(
@ -41,7 +41,7 @@ TOOLCHAIN_RUN_DEPS=(
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkg-config # build system
make cmake pkg-config # build system
curl wget # downloading libs
uuid-dev default-jre-headless # required by antlr
libreadline-dev # memgraph console
@ -53,9 +53,19 @@ MEMGRAPH_BUILD_DEPS=(
libcurl4-openssl-dev # mg-requests
sbcl # custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs nodejs zip unzip default-jdk-headless # driver tests
mono-runtime mono-mcs nodejs zip unzip default-jdk-headless openjdk-17-jdk-headless custom-maven3.9.3 # driver tests
custom-golang1.18.9 # for driver tests
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp2
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
@ -63,11 +73,53 @@ list() {
}
check() {
check_all_dpkg "$1"
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
apt install -y $1
apt update -y
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == openjdk-17-jdk-headless ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
apt install -y "$pkg"
# The default Java version should be Java 11
update-alternatives --set java /usr/lib/jvm/java-11-openjdk-amd64/bin/java
update-alternatives --set javac /usr/lib/jvm/java-11-openjdk-amd64/bin/javac
fi
continue
fi
apt install -y "$pkg"
done
}
deps=$2"[*]"


@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -24,7 +22,7 @@ TOOLCHAIN_BUILD_DEPS=(
libgmp-dev # for gdb
gperf # for proxygen
libssl-dev
libedit-dev libpcre3-dev automake bison # for swig
libedit-dev libpcre2-dev libpcre3-dev automake bison # for swig
)
TOOLCHAIN_RUN_DEPS=(
@ -40,7 +38,7 @@ TOOLCHAIN_RUN_DEPS=(
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkg-config # build system
make cmake pkg-config # build system
curl wget # for downloading libs
uuid-dev default-jre-headless # required by antlr
libreadline-dev # for memgraph console
@ -53,10 +51,21 @@ MEMGRAPH_BUILD_DEPS=(
libcurl4-openssl-dev # mg-requests
sbcl # for custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs zip unzip default-jdk-headless # for driver tests
dotnet-sdk-3.1 golang nodejs npm
mono-runtime mono-mcs zip unzip default-jdk-headless openjdk-17-jdk-headless custom-maven3.9.3 # for driver tests
dotnet-sdk-3.1 golang custom-golang1.18.9 nodejs npm # for driver tests
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp2
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
@ -64,12 +73,35 @@ list() {
}
check() {
check_all_dpkg "$1"
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
cd "$DIR"
apt update
export DEBIAN_FRONTEND=noninteractive
apt update -y
apt install -y wget
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
@ -77,8 +109,16 @@ install() {
else
echo "NOTE: export LANG=en_US.utf8"
fi
apt install -y wget
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == dotnet-sdk-3.1 ]; then
if ! dpkg -s dotnet-sdk-3.1 2>/dev/null >/dev/null; then
wget -nv https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
@ -88,6 +128,15 @@ install() {
fi
continue
fi
if [ "$pkg" == openjdk-17-jdk-headless ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
apt install -y "$pkg"
# The default Java version should be Java 11
update-alternatives --set java /usr/lib/jvm/java-11-openjdk-amd64/bin/java
update-alternatives --set javac /usr/lib/jvm/java-11-openjdk-amd64/bin/javac
fi
continue
fi
apt install -y "$pkg"
done
}


@ -0,0 +1,144 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
check_operating_system "ubuntu-22.04"
check_architecture "arm64" "aarch64"
TOOLCHAIN_BUILD_DEPS=(
coreutils gcc g++ build-essential make # generic build tools
wget # used for archive download
gnupg # used for archive signature verification
tar gzip bzip2 xz-utils unzip # used for archive unpacking
zlib1g-dev # zlib library used for all builds
libexpat1-dev libbabeltrace-dev liblzma-dev python3-dev texinfo # for gdb
libcurl4-openssl-dev # for cmake
libreadline-dev # for cmake and llvm
libffi-dev libxml2-dev # for llvm
curl # snappy
file
git # for thrift
libgmp-dev # for gdb
gperf # for proxygen
libssl-dev
libedit-dev libpcre2-dev libpcre3-dev automake bison # for swig
)
TOOLCHAIN_RUN_DEPS=(
make # generic build tools
tar gzip bzip2 xz-utils # used for archive unpacking
zlib1g # zlib library used for all builds
libexpat1 libbabeltrace1 liblzma5 python3 # for gdb
libcurl4 # for cmake
libreadline8 # for cmake and llvm
libffi7 libxml2 # for llvm
libssl-dev # for libevent
)
MEMGRAPH_BUILD_DEPS=(
git # source code control
make cmake pkg-config # build system
curl wget # for downloading libs
uuid-dev default-jre-headless # required by antlr
libreadline-dev # for memgraph console
libpython3-dev python3-dev # for query modules
libssl-dev
libseccomp-dev
netcat # tests are using nc to wait for memgraph
python3 python3-virtualenv python3-pip # for qa, macro_benchmark and stress tests
python3-yaml # for the configuration generator
libcurl4-openssl-dev # mg-requests
sbcl # for custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs zip unzip default-jdk-headless openjdk-17-jdk-headless custom-maven3.9.3 # for driver tests
dotnet-sdk-6.0 golang custom-golang1.18.9 nodejs npm
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp2
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
echo "$1"
}
check() {
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
cd "$DIR"
apt update
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
echo "LANG=en_US.utf8" >> /home/gh/actions-runner/.env
else
echo "NOTE: export LANG=en_US.utf8"
fi
apt install -y wget
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == dotnet-sdk-6.0 ]; then
if ! dpkg -s dotnet-sdk-6.0 2>/dev/null >/dev/null; then
wget -nv https://packages.microsoft.com/config/ubuntu/22.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
dpkg -i packages-microsoft-prod.deb
apt-get update
apt-get install -y apt-transport-https dotnet-sdk-6.0
fi
continue
fi
if [ "$pkg" == openjdk-17-jdk-headless ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
apt install -y "$pkg"
# The default Java version should be Java 11
update-alternatives --set java /usr/lib/jvm/java-11-openjdk-arm64/bin/java
update-alternatives --set javac /usr/lib/jvm/java-11-openjdk-arm64/bin/javac
fi
continue
fi
apt install -y "$pkg"
done
}
deps=$2"[*]"
"$1" "${!deps}"


@ -1,7 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source "$DIR/../util.sh"
@ -24,7 +22,7 @@ TOOLCHAIN_BUILD_DEPS=(
libgmp-dev # for gdb
gperf # for proxygen
libssl-dev
libedit-dev libpcre3-dev automake bison # for swig
libedit-dev libpcre2-dev libpcre3-dev automake bison # for swig
)
TOOLCHAIN_RUN_DEPS=(
@ -40,7 +38,7 @@ TOOLCHAIN_RUN_DEPS=(
MEMGRAPH_BUILD_DEPS=(
git # source code control
make pkg-config # build system
make cmake pkg-config # build system
curl wget # for downloading libs
uuid-dev default-jre-headless # required by antlr
libreadline-dev # for memgraph console
@ -53,10 +51,21 @@ MEMGRAPH_BUILD_DEPS=(
libcurl4-openssl-dev # mg-requests
sbcl # for custom Lisp C++ preprocessing
doxygen graphviz # source documentation generators
mono-runtime mono-mcs zip unzip default-jdk-headless # for driver tests
dotnet-sdk-6.0 golang nodejs npm
mono-runtime mono-mcs zip unzip default-jdk-headless openjdk-17-jdk-headless custom-maven3.9.3 # for driver tests
dotnet-sdk-6.0 golang custom-golang1.18.9 nodejs npm # for driver tests
autoconf # for jemalloc code generation
libtool # for protobuf code generation
libsasl2-dev
)
MEMGRAPH_TEST_DEPS="${MEMGRAPH_BUILD_DEPS[*]}"
MEMGRAPH_RUN_DEPS=(
logrotate openssl python3 libseccomp2
)
NEW_DEPS=(
wget curl tar gzip
)
list() {
@ -64,12 +73,34 @@ list() {
}
check() {
check_all_dpkg "$1"
local missing=""
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
if [ ! -f "/opt/apache-maven-3.9.3/bin/mvn" ]; then
missing="$pkg $missing"
fi
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
if [ ! -f "/opt/go1.18.9/go/bin/go" ]; then
missing="$pkg $missing"
fi
continue
fi
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
missing="$pkg $missing"
fi
done
if [ "$missing" != "" ]; then
echo "MISSING PACKAGES: $missing"
exit 1
fi
}
install() {
cd "$DIR"
apt update
apt update -y
apt install -y wget
# If GitHub Actions runner is installed, append LANG to the environment.
# Python related tests don't work without the LANG export.
if [ -d "/home/gh/actions-runner" ]; then
@ -77,8 +108,16 @@ install() {
else
echo "NOTE: export LANG=en_US.utf8"
fi
apt install -y wget
for pkg in $1; do
if [ "$pkg" == custom-maven3.9.3 ]; then
install_custom_maven "3.9.3"
continue
fi
if [ "$pkg" == custom-golang1.18.9 ]; then
install_custom_golang "1.18.9"
continue
fi
if [ "$pkg" == dotnet-sdk-6.0 ]; then
if ! dpkg -s dotnet-sdk-6.0 2>/dev/null >/dev/null; then
wget -nv https://packages.microsoft.com/config/ubuntu/22.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
@ -88,6 +127,15 @@ install() {
fi
continue
fi
if [ "$pkg" == openjdk-17-jdk-headless ]; then
if ! dpkg -s "$pkg" 2>/dev/null >/dev/null; then
apt install -y "$pkg"
# The default Java version should be Java 11
update-alternatives --set java /usr/lib/jvm/java-11-openjdk-amd64/bin/java
update-alternatives --set javac /usr/lib/jvm/java-11-openjdk-amd64/bin/javac
fi
continue
fi
apt install -y "$pkg"
done
}

environment/toolchain/.gitignore vendored Normal file

@ -0,0 +1,5 @@
archives
build
output
*.tar.gz
tmp_build.sh


@ -0,0 +1,48 @@
#!/bin/bash -e
# NOTE: Copy this under memgraph/environment/toolchain/vN/tmp_build.sh, edit and test.
pushd () { command pushd "$@" > /dev/null; }
popd () { command popd "$@" > /dev/null; }
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
CPUS=$( grep -c processor < /proc/cpuinfo )
cd "$DIR"
source "$DIR/../../util.sh"
DISTRO="$(operating_system)"
TOOLCHAIN_VERSION=5
NAME=toolchain-v$TOOLCHAIN_VERSION
PREFIX=/opt/$NAME
function log_tool_name () {
echo ""
echo ""
echo "#### $1 ####"
echo ""
echo ""
}
# HERE: Remove/clear dependencies from a given toolchain.
mkdir -p archives && pushd archives
# HERE: Download dependencies here.
popd
mkdir -p build
pushd build
source $PREFIX/activate
export CC=$PREFIX/bin/clang
export CXX=$PREFIX/bin/clang++
export CFLAGS="$CFLAGS -fPIC"
export PATH=$PREFIX/bin:$PATH
export LD_LIBRARY_PATH=$PREFIX/lib64
COMMON_CMAKE_FLAGS="-DCMAKE_INSTALL_PREFIX=$PREFIX
-DCMAKE_PREFIX_PATH=$PREFIX
-DCMAKE_BUILD_TYPE=Release
-DCMAKE_C_COMPILER=$CC
-DCMAKE_CXX_COMPILER=$CXX
-DBUILD_SHARED_LIBS=OFF
-DCMAKE_CXX_STANDARD=20
-DBUILD_TESTING=OFF
-DCMAKE_REQUIRED_INCLUDES=$PREFIX/include
-DCMAKE_POSITION_INDEPENDENT_CODE=ON"
# HERE: Add dependencies to test below.
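# For illustration only: a placeholder CMake-based dependency built against the
# toolchain (the archive name is an assumption, not a real toolchain dependency):
#   tar -xzf ../archives/mylib-1.0.tar.gz
#   pushd mylib-1.0 && mkdir -p build && pushd build
#   cmake .. $COMMON_CMAKE_FLAGS
#   make -j$CPUS install
#   popd && popd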


@ -7,7 +7,7 @@ DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
CPUS=$( grep -c processor < /proc/cpuinfo )
cd "$DIR"
source "$DIR/../util.sh"
source "$DIR/../../util.sh"
DISTRO="$(operating_system)"
# toolchain version
@ -30,10 +30,10 @@ LLVM_VERSION=11.0.0
SWIG_VERSION=4.0.2 # used only for LLVM compilation
# Check for the dependencies.
echo "ALL BUILD PACKAGES: $($DIR/../os/$DISTRO.sh list TOOLCHAIN_BUILD_DEPS)"
$DIR/../os/$DISTRO.sh check TOOLCHAIN_BUILD_DEPS
echo "ALL RUN PACKAGES: $($DIR/../os/$DISTRO.sh list TOOLCHAIN_RUN_DEPS)"
$DIR/../os/$DISTRO.sh check TOOLCHAIN_RUN_DEPS
echo "ALL BUILD PACKAGES: $($DIR/../../os/$DISTRO.sh list TOOLCHAIN_BUILD_DEPS)"
$DIR/../../os/$DISTRO.sh check TOOLCHAIN_BUILD_DEPS
echo "ALL RUN PACKAGES: $($DIR/../../os/$DISTRO.sh list TOOLCHAIN_RUN_DEPS)"
$DIR/../../os/$DISTRO.sh check TOOLCHAIN_RUN_DEPS
# check installation directory
NAME=toolchain-v$TOOLCHAIN_VERSION
@ -442,7 +442,7 @@ In order to be able to run all of these tools you should install the following
packages:
\`\`\`
$($DIR/../os/$DISTRO.sh list TOOLCHAIN_RUN_DEPS)
$($DIR/../../os/$DISTRO.sh list TOOLCHAIN_RUN_DEPS)
\`\`\`
## Usage


@ -7,7 +7,7 @@ DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
CPUS=$( grep -c processor < /proc/cpuinfo )
cd "$DIR"
source "$DIR/../util.sh"
source "$DIR/../../util.sh"
DISTRO="$(operating_system)"
# toolchain version
@ -31,10 +31,10 @@ LLVM_VERSION_LONG=12.0.1-rc4
SWIG_VERSION=4.0.2 # used only for LLVM compilation
# Check for the dependencies.
echo "ALL BUILD PACKAGES: $($DIR/../os/$DISTRO.sh list TOOLCHAIN_BUILD_DEPS)"
$DIR/../os/$DISTRO.sh check TOOLCHAIN_BUILD_DEPS
echo "ALL RUN PACKAGES: $($DIR/../os/$DISTRO.sh list TOOLCHAIN_RUN_DEPS)"
$DIR/../os/$DISTRO.sh check TOOLCHAIN_RUN_DEPS
echo "ALL BUILD PACKAGES: $($DIR/../../os/$DISTRO.sh list TOOLCHAIN_BUILD_DEPS)"
$DIR/../../os/$DISTRO.sh check TOOLCHAIN_BUILD_DEPS
echo "ALL RUN PACKAGES: $($DIR/../../os/$DISTRO.sh list TOOLCHAIN_RUN_DEPS)"
$DIR/../../os/$DISTRO.sh check TOOLCHAIN_RUN_DEPS
# check installation directory
NAME=toolchain-v$TOOLCHAIN_VERSION
@ -452,7 +452,7 @@ In order to be able to run all of these tools you should install the following
packages:
\`\`\`
$($DIR/../os/$DISTRO.sh list TOOLCHAIN_RUN_DEPS)
$($DIR/../../os/$DISTRO.sh list TOOLCHAIN_RUN_DEPS)
\`\`\`
## Usage


@ -7,9 +7,17 @@ DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
CPUS=$( grep -c processor < /proc/cpuinfo )
cd "$DIR"
source "$DIR/../util.sh"
source "$DIR/../../util.sh"
DISTRO="$(operating_system)"
function log_tool_name () {
echo ""
echo ""
echo "#### $1 ####"
echo ""
echo ""
}
for_arm=false
if [[ "$#" -eq 1 ]]; then
if [[ "$1" == "--for-arm" ]]; then
@ -20,9 +28,11 @@ if [[ "$#" -eq 1 ]]; then
fi
fi
os="$1"
# toolchain version
TOOLCHAIN_STDCXX="${TOOLCHAIN_STDCXX:-libstdc++}"
if [[ "$TOOLCHAIN_STDCXX" != "libstdc++" && "$TOOLCHAIN_STDCXX" != "libc++" ]]; then
echo "Only GCC (libstdc++) or LLVM (libc++) C++ standard library implementations are supported."
exit 1
fi
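Because TOOLCHAIN_STDCXX is read from the environment with a libstdc++ default, the C++ standard library implementation can be chosen per invocation; a hypothetical call (the build script's filename is assumed here):

TOOLCHAIN_STDCXX=libc++ ./build.sh --for-arm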
TOOLCHAIN_VERSION=4
# package versions used
@ -41,11 +51,15 @@ CPPCHECK_VERSION=2.6
LLVM_VERSION=13.0.0
SWIG_VERSION=4.0.2 # used only for LLVM compilation
# Check for the dependencies.
echo "ALL BUILD PACKAGES: $($DIR/../os/$DISTRO.sh list TOOLCHAIN_BUILD_DEPS)"
$DIR/../os/$DISTRO.sh check TOOLCHAIN_BUILD_DEPS
echo "ALL RUN PACKAGES: $($DIR/../os/$DISTRO.sh list TOOLCHAIN_RUN_DEPS)"
$DIR/../os/$DISTRO.sh check TOOLCHAIN_RUN_DEPS
# Set the right operating system setup script.
ENV_SCRIPT="$DIR/../../os/$DISTRO.sh"
if [[ "$for_arm" = true ]]; then
ENV_SCRIPT="$DIR/../../os/$DISTRO-arm.sh"
fi
echo "ALL BUILD PACKAGES: $(${ENV_SCRIPT} list TOOLCHAIN_BUILD_DEPS)"
${ENV_SCRIPT} check TOOLCHAIN_BUILD_DEPS
echo "ALL RUN PACKAGES: $(${ENV_SCRIPT} list TOOLCHAIN_RUN_DEPS)"
${ENV_SCRIPT} check TOOLCHAIN_RUN_DEPS
# check installation directory
NAME=toolchain-v$TOOLCHAIN_VERSION
@ -99,6 +113,8 @@ if [ ! -f llvm-$LLVM_VERSION.src.tar.xz ]; then
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-$LLVM_VERSION/clang-tools-extra-$LLVM_VERSION.src.tar.xz
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-$LLVM_VERSION/compiler-rt-$LLVM_VERSION.src.tar.xz
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-$LLVM_VERSION/libunwind-$LLVM_VERSION.src.tar.xz
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-$LLVM_VERSION/libcxx-$LLVM_VERSION.src.tar.xz
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-$LLVM_VERSION/libcxxabi-$LLVM_VERSION.src.tar.xz
fi
if [ ! -f pahole-gdb-master.zip ]; then
wget https://github.com/PhilArmstrong/pahole-gdb/archive/master.zip -O pahole-gdb-master.zip
@ -156,6 +172,8 @@ if [ ! -f llvm-$LLVM_VERSION.src.tar.xz.sig ]; then
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-$LLVM_VERSION/clang-tools-extra-$LLVM_VERSION.src.tar.xz.sig
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-$LLVM_VERSION/compiler-rt-$LLVM_VERSION.src.tar.xz.sig
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-$LLVM_VERSION/libunwind-$LLVM_VERSION.src.tar.xz.sig
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-$LLVM_VERSION/libcxx-$LLVM_VERSION.src.tar.xz.sig
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-$LLVM_VERSION/libcxxabi-$LLVM_VERSION.src.tar.xz.sig
fi
# list of valid llvm gnupg keys: https://releases.llvm.org/download.html
$GPG --keyserver $KEYSERVER --recv-keys 0x474E22316ABF4785A88C6E8EA2C794A986419D8A
@ -165,6 +183,8 @@ $GPG --verify lld-$LLVM_VERSION.src.tar.xz.sig lld-$LLVM_VERSION.src.tar.xz
$GPG --verify clang-tools-extra-$LLVM_VERSION.src.tar.xz.sig clang-tools-extra-$LLVM_VERSION.src.tar.xz
$GPG --verify compiler-rt-$LLVM_VERSION.src.tar.xz.sig compiler-rt-$LLVM_VERSION.src.tar.xz
$GPG --verify libunwind-$LLVM_VERSION.src.tar.xz.sig libunwind-$LLVM_VERSION.src.tar.xz
$GPG --verify libcxx-$LLVM_VERSION.src.tar.xz.sig libcxx-$LLVM_VERSION.src.tar.xz
$GPG --verify libcxxabi-$LLVM_VERSION.src.tar.xz.sig libcxxabi-$LLVM_VERSION.src.tar.xz
popd
@ -172,7 +192,7 @@ popd
mkdir -p build
pushd build
# compile gcc
log_tool_name "GCC $GCC_VERSION"
if [ ! -f $PREFIX/bin/gcc ]; then
if [ -d gcc-$GCC_VERSION ]; then
rm -rf gcc-$GCC_VERSION
@ -263,7 +283,7 @@ fi
export PATH=$PREFIX/bin:$PATH
export LD_LIBRARY_PATH=$PREFIX/lib64
# compile binutils
log_tool_name "binutils $BINUTILS_VERSION"
if [ ! -f $PREFIX/bin/ld.gold ]; then
if [ -d binutils-$BINUTILS_VERSION ]; then
rm -rf binutils-$BINUTILS_VERSION
@ -327,7 +347,7 @@ if [ ! -f $PREFIX/bin/ld.gold ]; then
popd && popd
fi
# compile gdb
log_tool_name "GDB $GDB_VERSION"
if [ ! -f $PREFIX/bin/gdb ]; then
if [ -d gdb-$GDB_VERSION ]; then
rm -rf gdb-$GDB_VERSION
@ -363,6 +383,62 @@ if [ ! -f $PREFIX/bin/gdb ]; then
--without-babeltrace \
--enable-tui \
--with-python=python3
elif [[ "${DISTRO}" == fedora* ]]; then
# readline is left out because gdb does not compile with it
env \
CC=gcc \
CXX=g++ \
CFLAGS="-g -O2 -fstack-protector-strong -Wformat -Werror=format-security" \
CXXFLAGS="-g -O2 -fstack-protector-strong -Wformat -Werror=format-security" \
CPPFLAGS="-Wdate-time -D_FORTIFY_SOURCE=2 -fPIC" \
LDFLAGS="-Wl,-z,relro" \
PYTHON="" \
../configure \
--build=x86_64-linux-gnu \
--host=x86_64-linux-gnu \
--prefix=$PREFIX \
--disable-maintainer-mode \
--disable-dependency-tracking \
--disable-silent-rules \
--disable-gdbtk \
--disable-shared \
--without-guile \
--with-system-gdbinit=$PREFIX/etc/gdb/gdbinit \
--with-expat \
--with-system-zlib \
--with-lzma \
--with-babeltrace \
--with-intel-pt \
--enable-tui \
--with-python=python3
elif [[ "${DISTRO}" == "amzn-2" ]]; then
# Remove readline; gdb does not compile with it.
env \
CC=gcc \
CXX=g++ \
CFLAGS="-g -O2 -fstack-protector-strong -Wformat -Werror=format-security" \
CXXFLAGS="-g -O2 -fstack-protector-strong -Wformat -Werror=format-security" \
CPPFLAGS="-Wdate-time -D_FORTIFY_SOURCE=2 -fPIC" \
LDFLAGS="-Wl,-z,relro" \
PYTHON="" \
../configure \
--build=x86_64-linux-gnu \
--host=x86_64-linux-gnu \
--prefix=$PREFIX \
--disable-maintainer-mode \
--disable-dependency-tracking \
--disable-silent-rules \
--disable-gdbtk \
--disable-shared \
--without-guile \
--with-system-gdbinit=$PREFIX/etc/gdb/gdbinit \
--with-expat \
--with-system-zlib \
--with-lzma \
--with-babeltrace \
--with-intel-pt \
--enable-tui \
--with-python=python3
else
# https://buildd.debian.org/status/fetch.php?pkg=gdb&arch=amd64&ver=8.2.1-2&stamp=1550831554&raw=0
env \
@ -398,13 +474,13 @@ if [ ! -f $PREFIX/bin/gdb ]; then
popd && popd
fi
# install pahole
log_tool_name "install pahole"
if [ ! -d $PREFIX/share/pahole-gdb ]; then
unzip ../archives/pahole-gdb-master.zip
mv pahole-gdb-master $PREFIX/share/pahole-gdb
fi
# setup system gdbinit
log_tool_name "setup system gdbinit"
if [ ! -f $PREFIX/etc/gdb/gdbinit ]; then
mkdir -p $PREFIX/etc/gdb
cat >$PREFIX/etc/gdb/gdbinit <<EOF
@ -430,7 +506,7 @@ end
EOF
fi
# compile cmake
log_tool_name "cmake $CMAKE_VERSION"
if [ ! -f $PREFIX/bin/cmake ]; then
if [ -d cmake-$CMAKE_VERSION ]; then
rm -rf cmake-$CMAKE_VERSION
@ -456,7 +532,7 @@ if [ ! -f $PREFIX/bin/cmake ]; then
popd && popd
fi
# compile cppcheck
log_tool_name "cppcheck $CPPCHECK_VERSION"
if [ ! -f $PREFIX/bin/cppcheck ]; then
if [ -d cppcheck-$CPPCHECK_VERSION ]; then
rm -rf cppcheck-$CPPCHECK_VERSION
@ -480,7 +556,7 @@ if [ ! -f $PREFIX/bin/cppcheck ]; then
popd
fi
# compile swig
log_tool_name "swig $SWIG_VERSION"
if [ ! -d swig-$SWIG_VERSION/install ]; then
if [ -d swig-$SWIG_VERSION ]; then
rm -rf swig-$SWIG_VERSION
@ -496,7 +572,7 @@ if [ ! -d swig-$SWIG_VERSION/install ]; then
popd && popd
fi
# compile llvm
log_tool_name "LLVM $LLVM_VERSION"
if [ ! -f $PREFIX/bin/clang ]; then
if [ -d llvm-$LLVM_VERSION ]; then
rm -rf llvm-$LLVM_VERSION
@ -513,8 +589,19 @@ if [ ! -f $PREFIX/bin/clang ]; then
mv compiler-rt-$LLVM_VERSION.src/ llvm-$LLVM_VERSION/projects/compiler-rt
tar -xvf ../archives/libunwind-$LLVM_VERSION.src.tar.xz
mv libunwind-$LLVM_VERSION.src/include/mach-o llvm-$LLVM_VERSION/tools/lld/include
# The following is required because of libc++
tar -xvf ../archives/libcxx-$LLVM_VERSION.src.tar.xz
mv libcxx-$LLVM_VERSION.src llvm-$LLVM_VERSION/projects/libcxx
tar -xvf ../archives/libcxxabi-$LLVM_VERSION.src.tar.xz
mv libcxxabi-$LLVM_VERSION.src llvm-$LLVM_VERSION/projects/libcxxabi
# NOTE: Part of libunwind was already moved in one of the previous steps.
rm -r libunwind-$LLVM_VERSION.src
tar -xvf ../archives/libunwind-$LLVM_VERSION.src.tar.xz
mv libunwind-$LLVM_VERSION.src llvm-$LLVM_VERSION/projects/libunwind
pushd llvm-$LLVM_VERSION
mkdir build && pushd build
mkdir -p build && pushd build
# activate swig
export PATH=$DIR/build/swig-$SWIG_VERSION/install/bin:$PATH
# influenced by: https://buildd.debian.org/status/fetch.php?pkg=llvm-toolchain-7&arch=amd64&ver=1%3A7.0.1%7E%2Brc2-1%7Eexp1&stamp=1541506173&raw=0
@ -567,7 +654,7 @@ In order to be able to run all of these tools you should install the following
packages:
\`\`\`
$($DIR/../os/$DISTRO.sh list TOOLCHAIN_RUN_DEPS)
$($DIR/../../os/$ENV_SCRIPT.sh list TOOLCHAIN_RUN_DEPS)
\`\`\`
## Usage
@ -624,6 +711,7 @@ export PS1="($NAME) \$PS1"
export LD_LIBRARY_PATH=$PREFIX/lib:$PREFIX/lib64
export CXXFLAGS=-isystem\ $PREFIX/include\ \$CXXFLAGS
export CFLAGS=-isystem\ $PREFIX/include\ \$CFLAGS
export VENV=$PREFIX
# disable root
function su () {
@ -675,7 +763,7 @@ PROXYGEN_SHA256=5360a8ccdfb2f5a6c7b3eed331ec7ab0e2c792d579c6fff499c85c516c11fe14
SNAPPY_SHA256=75c1fbb3d618dd3a0483bff0e26d0a92b495bbe5059c8b4f1c962b478b6e06e7
SNAPPY_VERSION=1.1.9
XZ_VERSION=5.2.5 # for LZMA
ZLIB_VERSION=1.2.12
ZLIB_VERSION=1.2.13
ZSTD_VERSION=1.5.0
WANGLE_SHA256=1002e9c32b6f4837f6a760016e3b3e22f3509880ef3eaad191c80dc92655f23f
@ -820,7 +908,11 @@ source $PREFIX/activate
export CC=$PREFIX/bin/clang
export CXX=$PREFIX/bin/clang++
export CFLAGS="$CFLAGS -fPIC"
export CXXFLAGS="$CXXFLAGS -fPIC"
if [ "$TOOLCHAIN_STDCXX" = "libstdc++" ]; then
export CXXFLAGS="$CXXFLAGS -fPIC"
else
export CXXFLAGS="$CXXFLAGS -fPIC -stdlib=libc++"
fi
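# A hedged usage sketch (assumes TOOLCHAIN_STDCXX is picked up from the
# environment, which is not shown here): building a libc++ toolchain would
# look like
#   TOOLCHAIN_STDCXX=libc++ ./build.sh
# while leaving it set to libstdc++ keeps GCC's default standard library.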
COMMON_CMAKE_FLAGS="-DCMAKE_INSTALL_PREFIX=$PREFIX
-DCMAKE_PREFIX_PATH=$PREFIX
-DCMAKE_BUILD_TYPE=Release
@ -834,7 +926,7 @@ COMMON_CMAKE_FLAGS="-DCMAKE_INSTALL_PREFIX=$PREFIX
COMMON_CONFIGURE_FLAGS="--enable-shared=no --prefix=$PREFIX"
COMMON_MAKE_INSTALL_FLAGS="-j$CPUS BUILD_SHARED=no PREFIX=$PREFIX install"
# install bzip2
log_tool_name "bzip2 $BZIP2_VERSION"
if [ ! -f $PREFIX/include/bzlib.h ]; then
if [ -d bzip2-$BZIP2_VERSION ]; then
rm -rf bzip2-$BZIP2_VERSION
@ -845,7 +937,7 @@ if [ ! -f $PREFIX/include/bzlib.h ]; then
popd
fi
# install fmt
log_tool_name "fmt $FMT_VERSION"
if [ ! -d $PREFIX/include/fmt ]; then
if [ -d fmt-$FMT_VERSION ]; then
rm -rf fmt-$FMT_VERSION
@ -858,7 +950,7 @@ if [ ! -d $PREFIX/include/fmt ]; then
popd && popd
fi
# install lz4
log_tool_name "lz4 $LZ4_VERSION"
if [ ! -f $PREFIX/include/lz4.h ]; then
if [ -d lz4-$LZ4_VERSION ]; then
rm -rf lz4-$LZ4_VERSION
@ -869,7 +961,7 @@ if [ ! -f $PREFIX/include/lz4.h ]; then
popd
fi
# install xz
log_tool_name "xz $XZ_VERSION"
if [ ! -f $PREFIX/include/lzma.h ]; then
if [ -d xz-$XZ_VERSION ]; then
rm -rf xz-$XZ_VERSION
@ -881,7 +973,7 @@ if [ ! -f $PREFIX/include/lzma.h ]; then
popd
fi
# install zlib
log_tool_name "zlib $ZLIB_VERSION"
if [ ! -f $PREFIX/include/zlib.h ]; then
if [ -d zlib-$ZLIB_VERSION ]; then
rm -rf zlib-$ZLIB_VERSION
@ -895,7 +987,7 @@ if [ ! -f $PREFIX/include/zlib.h ]; then
popd && popd
fi
# install zstd
log_tool_name "zstd $ZSTD_VERSION"
if [ ! -f $PREFIX/include/zstd.h ]; then
if [ -d zstd-$ZSTD_VERSION ]; then
rm -rf zstd-$ZSTD_VERSION
@ -910,7 +1002,8 @@ if [ ! -f $PREFIX/include/zstd.h ]; then
popd && popd
fi
# install jemalloc
# TODO(gitbuda): Freeze jemalloc version.
log_tool_name "jemalloc"
if [ ! -d $PREFIX/include/jemalloc ]; then
if [ -d jemalloc ]; then
rm -rf jemalloc
@ -927,7 +1020,7 @@ if [ ! -d $PREFIX/include/jemalloc ]; then
popd
fi
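# A sketch for the TODO above (the tag is an assumption, not the project's
# pick): pinning jemalloc could replace the bare clone with something like
#   git clone --depth 1 --branch 5.2.1 https://github.com/jemalloc/jemalloc.git
# The same idea applies to the unpinned gflags checkout below.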
# install boost
log_tool_name "BOOST $BOOST_VERSION"
if [ ! -d $PREFIX/include/boost ]; then
if [ -d boost_$BOOST_VERSION_UNDERSCORES ]; then
rm -rf boost_$BOOST_VERSION_UNDERSCORES
@ -935,15 +1028,24 @@ if [ ! -d $PREFIX/include/boost ]; then
tar -xzf ../archives/boost_$BOOST_VERSION_UNDERSCORES.tar.gz
pushd boost_$BOOST_VERSION_UNDERSCORES
./bootstrap.sh --prefix=$PREFIX --with-toolset=clang --with-python=python3 --without-icu
./b2 toolset=clang -j$CPUS install variant=release link=static cxxstd=20 --disable-icu \
-sZLIB_SOURCE="$PREFIX" -sZLIB_INCLUDE="$PREFIX/include" -sZLIB_LIBPATH="$PREFIX/lib" \
-sBZIP2_SOURCE="$PREFIX" -sBZIP2_INCLUDE="$PREFIX/include" -sBZIP2_LIBPATH="$PREFIX/lib" \
-sLZMA_SOURCE="$PREFIX" -sLZMA_INCLUDE="$PREFIX/include" -sLZMA_LIBPATH="$PREFIX/lib" \
-sZSTD_SOURCE="$PREFIX" -sZSTD_INCLUDE="$PREFIX/include" -sZSTD_LIBPATH="$PREFIX/lib"
if [ "$TOOLCHAIN_STDCXX" = "libstdc++" ]; then
./b2 toolset=clang -j$CPUS install variant=release link=static cxxstd=20 --disable-icu \
-sZLIB_SOURCE="$PREFIX" -sZLIB_INCLUDE="$PREFIX/include" -sZLIB_LIBPATH="$PREFIX/lib" \
-sBZIP2_SOURCE="$PREFIX" -sBZIP2_INCLUDE="$PREFIX/include" -sBZIP2_LIBPATH="$PREFIX/lib" \
-sLZMA_SOURCE="$PREFIX" -sLZMA_INCLUDE="$PREFIX/include" -sLZMA_LIBPATH="$PREFIX/lib" \
-sZSTD_SOURCE="$PREFIX" -sZSTD_INCLUDE="$PREFIX/include" -sZSTD_LIBPATH="$PREFIX/lib"
else
./b2 toolset=clang -j$CPUS install variant=release link=static cxxstd=20 --disable-icu \
cxxflags="-stdlib=libc++" linkflags="-stdlib=libc++" \
-sZLIB_SOURCE="$PREFIX" -sZLIB_INCLUDE="$PREFIX/include" -sZLIB_LIBPATH="$PREFIX/lib" \
-sBZIP2_SOURCE="$PREFIX" -sBZIP2_INCLUDE="$PREFIX/include" -sBZIP2_LIBPATH="$PREFIX/lib" \
-sLZMA_SOURCE="$PREFIX" -sLZMA_INCLUDE="$PREFIX/include" -sLZMA_LIBPATH="$PREFIX/lib" \
-sZSTD_SOURCE="$PREFIX" -sZSTD_INCLUDE="$PREFIX/include" -sZSTD_LIBPATH="$PREFIX/lib"
fi
popd
fi
# install double-conversion
log_tool_name "double-conversion $DOUBLE_CONVERSION_VERSION"
if [ ! -d $PREFIX/include/double-conversion ]; then
if [ -d double-conversion-$DOUBLE_CONVERSION_VERSION ]; then
rm -rf double-conversion-$DOUBLE_CONVERSION_VERSION
@ -958,7 +1060,8 @@ if [ ! -d $PREFIX/include/double-conversion ]; then
popd && popd
fi
# install gflags
# TODO(gitbuda): Freeze gflags version.
log_tool_name "gflags"
if [ ! -d $PREFIX/include/gflags ]; then
if [ -d gflags ]; then
rm -rf gflags
@ -977,7 +1080,7 @@ if [ ! -d $PREFIX/include/gflags ]; then
popd && popd
fi
# install libunwind
log_tool_name "libunwind $LIBUNWIND_VERSION"
if [ ! -f $PREFIX/include/libunwind.h ]; then
if [ -d libunwind-$LIBUNWIND_VERSION ]; then
rm -rf libunwind-$LIBUNWIND_VERSION
@ -990,7 +1093,7 @@ if [ ! -f $PREFIX/include/libunwind.h ]; then
popd
fi
# install glog
log_tool_name "glog $GLOG_VERSION"
if [ ! -d $PREFIX/include/glog ]; then
if [ -d glog-$GLOG_VERSION ]; then
rm -rf glog-$GLOG_VERSION
@ -1004,7 +1107,7 @@ if [ ! -d $PREFIX/include/glog ]; then
popd && popd
fi
# install libevent
log_tool_name "libevent $LIBEVENT_VERSION"
if [ ! -d $PREFIX/include/event2 ]; then
if [ -d libevent-$LIBEVENT_VERSION ]; then
rm -rf libevent-$LIBEVENT_VERSION
@ -1023,7 +1126,7 @@ if [ ! -d $PREFIX/include/event2 ]; then
popd && popd
fi
# install snappy
log_tool_name "snappy $SNAPPY_VERSION"
if [ ! -f $PREFIX/include/snappy.h ]; then
if [ -d snappy-$SNAPPY_VERSION ]; then
rm -rf snappy-$SNAPPY_VERSION
@ -1041,7 +1144,7 @@ if [ ! -f $PREFIX/include/snappy.h ]; then
popd && popd
fi
# install libsodium
log_tool_name "libsodium $LIBSODIUM_VERSION"
if [ ! -f $PREFIX/include/sodium.h ]; then
if [ -d libsodium-$LIBSODIUM_VERSION ]; then
rm -rf libsodium-$LIBSODIUM_VERSION
@ -1053,7 +1156,7 @@ if [ ! -f $PREFIX/include/sodium.h ]; then
popd
fi
# install libaio
log_tool_name "libaio $LIBAIO_VERSION"
if [ ! -f $PREFIX/include/libaio.h ]; then
if [ -d libaio-$LIBAIO_VERSION ]; then
rm -rf libaio-$LIBAIO_VERSION
@ -1064,114 +1167,121 @@ if [ ! -f $PREFIX/include/libaio.h ]; then
popd
fi
# install folly
if [ ! -d $PREFIX/include/folly ]; then
if [ -d folly-$FBLIBS_VERSION ]; then
rm -rf folly-$FBLIBS_VERSION
if [[ "${DISTRO}" != "amzn-2" ]]; then
log_tool_name "folly $FBLIBS_VERSION"
if [ ! -d $PREFIX/include/folly ]; then
if [ -d folly-$FBLIBS_VERSION ]; then
rm -rf folly-$FBLIBS_VERSION
fi
mkdir folly-$FBLIBS_VERSION
tar -xzf ../archives/folly-$FBLIBS_VERSION.tar.gz -C folly-$FBLIBS_VERSION
pushd folly-$FBLIBS_VERSION
patch -p1 < ../../folly.patch
# "build" is used by the Facebook builder, hence _build
mkdir _build
pushd _build
cmake .. $COMMON_CMAKE_FLAGS \
-DBOOST_LINK_STATIC=ON \
-DBUILD_TESTS=OFF \
-DGFLAGS_NOTHREADS=OFF \
-DCXX_STD="c++20"
make -j$CPUS install
popd && popd
fi
mkdir folly-$FBLIBS_VERSION
tar -xzf ../archives/folly-$FBLIBS_VERSION.tar.gz -C folly-$FBLIBS_VERSION
pushd folly-$FBLIBS_VERSION
patch -p1 < ../../folly.patch
# "build" is used by the Facebook builder, hence _build
mkdir _build
pushd _build
cmake .. $COMMON_CMAKE_FLAGS \
-DBOOST_LINK_STATIC=ON \
-DBUILD_TESTS=OFF \
-DGFLAGS_NOTHREADS=OFF \
-DCXX_STD="c++20"
make -j$CPUS install
popd && popd
fi
# install fizz
if [ ! -d $PREFIX/include/fizz ]; then
if [ -d fizz-$FBLIBS_VERSION ]; then
rm -rf fizz-$FBLIBS_VERSION
log_tool_name "fizz $FBLIBS_VERSION"
if [ ! -d $PREFIX/include/fizz ]; then
if [ -d fizz-$FBLIBS_VERSION ]; then
rm -rf fizz-$FBLIBS_VERSION
fi
mkdir fizz-$FBLIBS_VERSION
tar -xzf ../archives/fizz-$FBLIBS_VERSION.tar.gz -C fizz-$FBLIBS_VERSION
pushd fizz-$FBLIBS_VERSION
# "build" is used by the Facebook builder, hence _build
mkdir _build
pushd _build
cmake ../fizz $COMMON_CMAKE_FLAGS \
-DBUILD_TESTS=OFF \
-DBUILD_EXAMPLES=OFF \
-DGFLAGS_NOTHREADS=OFF
make -j$CPUS install
popd && popd
fi
mkdir fizz-$FBLIBS_VERSION
tar -xzf ../archives/fizz-$FBLIBS_VERSION.tar.gz -C fizz-$FBLIBS_VERSION
pushd fizz-$FBLIBS_VERSION
# "build" is used by the Facebook builder, hence _build
mkdir _build
pushd _build
cmake ../fizz $COMMON_CMAKE_FLAGS \
-DBUILD_TESTS=OFF \
-DBUILD_EXAMPLES=OFF \
-DGFLAGS_NOTHREADS=OFF
make -j$CPUS install
popd && popd
fi
# install wangle
if [ ! -d $PREFIX/include/wangle ]; then
if [ -d wangle-$FBLIBS_VERSION ]; then
rm -rf wangle-$FBLIBS_VERSION
log_tool_name "wangle FBLIBS_VERSION"
if [ ! -d $PREFIX/include/wangle ]; then
if [ -d wangle-$FBLIBS_VERSION ]; then
rm -rf wangle-$FBLIBS_VERSION
fi
mkdir wangle-$FBLIBS_VERSION
tar -xzf ../archives/wangle-$FBLIBS_VERSION.tar.gz -C wangle-$FBLIBS_VERSION
pushd wangle-$FBLIBS_VERSION
# "build" is used by the Facebook builder, hence _build
mkdir _build
pushd _build
cmake ../wangle $COMMON_CMAKE_FLAGS \
-DBUILD_TESTS=OFF \
-DBUILD_EXAMPLES=OFF \
-DGFLAGS_NOTHREADS=OFF
make -j$CPUS install
popd && popd
fi
mkdir wangle-$FBLIBS_VERSION
tar -xzf ../archives/wangle-$FBLIBS_VERSION.tar.gz -C wangle-$FBLIBS_VERSION
pushd wangle-$FBLIBS_VERSION
# "build" is used by the Facebook builder, hence _build
mkdir _build
pushd _build
cmake ../wangle $COMMON_CMAKE_FLAGS \
-DBUILD_TESTS=OFF \
-DBUILD_EXAMPLES=OFF \
-DGFLAGS_NOTHREADS=OFF
make -j$CPUS install
popd && popd
fi
# install proxygen
if [ ! -d $PREFIX/include/proxygen ]; then
if [ -d proxygen-$FBLIBS_VERSION ]; then
rm -rf proxygen-$FBLIBS_VERSION
log_tool_name "proxygen $FBLIBS_VERSION"
if [ ! -d $PREFIX/include/proxygen ]; then
if [ -d proxygen-$FBLIBS_VERSION ]; then
rm -rf proxygen-$FBLIBS_VERSION
fi
mkdir proxygen-$FBLIBS_VERSION
tar -xzf ../archives/proxygen-$FBLIBS_VERSION.tar.gz -C proxygen-$FBLIBS_VERSION
pushd proxygen-$FBLIBS_VERSION
patch -p1 < ../../proxygen.patch
# "build" is used by the Facebook builder, hence _build
mkdir _build
pushd _build
cmake .. $COMMON_CMAKE_FLAGS \
-DBUILD_TESTS=OFF \
-DBUILD_SAMPLES=OFF \
-DGFLAGS_NOTHREADS=OFF \
-DBUILD_QUIC=OFF
make -j$CPUS install
popd && popd
fi
mkdir proxygen-$FBLIBS_VERSION
tar -xzf ../archives/proxygen-$FBLIBS_VERSION.tar.gz -C proxygen-$FBLIBS_VERSION
pushd proxygen-$FBLIBS_VERSION
patch -p1 < ../../proxygen.patch
# "build" is used by the Facebook builder, hence _build
mkdir _build
pushd _build
cmake .. $COMMON_CMAKE_FLAGS \
-DBUILD_TESTS=OFF \
-DBUILD_SAMPLES=OFF \
-DGFLAGS_NOTHREADS=OFF \
-DBUILD_QUIC=OFF
make -j$CPUS install
popd && popd
fi
# install flex
if [ ! -f $PREFIX/include/FlexLexer.h ]; then
if [ -d flex-$FLEX_VERSION ]; then
rm -rf flex-$FLEX_VERSION
log_tool_name "flex $FBLIBS_VERSION"
if [ ! -f $PREFIX/include/FlexLexer.h ]; then
if [ -d flex-$FLEX_VERSION ]; then
rm -rf flex-$FLEX_VERSION
fi
tar -xzf ../archives/flex-$FLEX_VERSION.tar.gz
pushd flex-$FLEX_VERSION
./configure $COMMON_CONFIGURE_FLAGS
make -j$CPUS install
popd
fi
tar -xzf ../archives/flex-$FLEX_VERSION.tar.gz
pushd flex-$FLEX_VERSION
./configure $COMMON_CONFIGURE_FLAGS
make -j$CPUS install
popd
fi
# install fbthrift
if [ ! -d $PREFIX/include/thrift ]; then
if [ -d fbthrift-$FBLIBS_VERSION ]; then
rm -rf fbthrift-$FBLIBS_VERSION
log_tool_name "fbthrift $FBLIBS_VERSION"
if [ ! -d $PREFIX/include/thrift ]; then
if [ -d fbthrift-$FBLIBS_VERSION ]; then
rm -rf fbthrift-$FBLIBS_VERSION
fi
git clone --depth 1 --branch v$FBLIBS_VERSION https://github.com/facebook/fbthrift.git fbthrift-$FBLIBS_VERSION
pushd fbthrift-$FBLIBS_VERSION
# "build" is used by the Facebook builder, hence _build
mkdir _build
pushd _build
if [ "$TOOLCHAIN_STDCXX" = "libstdc++" ]; then
CMAKE_CXX_FLAGS="-fsized-deallocation"
else
CMAKE_CXX_FLAGS="-fsized-deallocation -stdlib=libc++"
fi
cmake .. $COMMON_CMAKE_FLAGS \
-Denable_tests=OFF \
-DGFLAGS_NOTHREADS=OFF \
-DCMAKE_CXX_FLAGS="$CMAKE_CXX_FLAGS"
make -j$CPUS install
popd
fi
git clone --depth 1 --branch v$FBLIBS_VERSION https://github.com/facebook/fbthrift.git fbthrift-$FBLIBS_VERSION
pushd fbthrift-$FBLIBS_VERSION
# "build" is used by the Facebook builder, hence _build
mkdir _build
pushd _build
cmake .. $COMMON_CMAKE_FLAGS \
-Denable_tests=OFF \
-DGFLAGS_NOTHREADS=OFF \
-DCMAKE_CXX_FLAGS=-fsized-deallocation
make -j$CPUS install
popd
fi
popd
@ -1179,7 +1289,7 @@ popd
# create toolchain archive
if [ ! -f $NAME-binaries-$DISTRO.tar.gz ]; then
DISTRO_FULL_NAME=${DISTRO}
if [[ "${DISTRO}" == centos* ]]; then
if [[ "${DISTRO}" == centos* ]] || [[ "${DISTRO}" == fedora* ]]; then
if [[ "$for_arm" = "true" ]]; then
DISTRO_FULL_NAME="$DISTRO_FULL_NAME-aarch64"
else
@ -1192,7 +1302,12 @@ if [ ! -f $NAME-binaries-$DISTRO.tar.gz ]; then
DISTRO_FULL_NAME="$DISTRO_FULL_NAME-amd64"
fi
fi
if [ "$TOOLCHAIN_STDCXX" = "libstdc++" ]; then
# Skip because infra scripts assume there is no C++ standard lib suffix in the name.
echo "NOTE: Not adding a C++ standard lib suffix to the archive name; GCC's libstdc++ is the default."
else
DISTRO_FULL_NAME="$DISTRO_FULL_NAME-libc++"
fi
tar --owner=root --group=root -cpvzf $NAME-binaries-$DISTRO_FULL_NAME.tar.gz -C /opt $NAME
fi
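# For illustration only (values are assumptions; the exact name is composed
# from the variables above): with TOOLCHAIN_VERSION=5 on debian-12 amd64 and
# TOOLCHAIN_STDCXX=libc++ this yields
#   toolchain-v5-binaries-debian-12-amd64-libc++.tar.gz
# while a libstdc++ build drops the -libc++ suffix.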


@ -4,7 +4,7 @@ diff -ur a/CMakeLists.txt b/CMakeLists.txt
@@ -52,9 +52,9 @@
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /EHs-c-")
add_definitions(-D_HAS_EXCEPTIONS=0)
- # Disable RTTI.
- string(REGEX REPLACE "/GR" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /GR-")
@ -17,7 +17,7 @@ diff -ur a/CMakeLists.txt b/CMakeLists.txt
@@ -77,9 +77,9 @@
string(REGEX REPLACE "-fexceptions" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-exceptions")
- # Disable RTTI.
- string(REGEX REPLACE "-frtti" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-rtti")
@ -25,5 +25,5 @@ diff -ur a/CMakeLists.txt b/CMakeLists.txt
+ # string(REGEX REPLACE "-frtti" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
+ # set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-rtti")
endif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# BUILD_SHARED_LIBS is a standard CMake variable, but we declare it here to make

environment/toolchain/v5/build.sh Executable file

File diff suppressed because it is too large


@ -0,0 +1,42 @@
#!/bin/bash -ex
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
PREFIX=/opt/toolchain-v5
# NOTE: Oftentimes, when versions in the build script are changed, something
# doesn't work. To avoid rebuilding the whole toolchain while still rebuilding
# a specific lib from scratch, uncomment the relevant line below and run this
# script; don't forget to comment it back to avoid unnecessary deletes the
# next time you run it. (See the sketch after this list.)
# rm -rf "$DIR/build"
# rm -rf "$DIR/output"
# rm -rf "$PREFIX/bin/gcc"
# rm -rf "$PREFIX/bin/ld.gold"
# rm -rf "$PREFIX/bin/gdb"
# rm -rf "$PREFIX/bin/cmake"
# rm -rf "$PREFIX/bin/clang"
# rm -rf "$PREFIX/include/bzlib.h"
# rm -rf "$PREFIX/include/fmt"
# rm -rf "$PREFIX/include/lz4.h"
# rm -rf "$PREFIX/include/lzma.h"
# rm -rf "$PREFIX/include/zlib.h"
# rm -rf "$PREFIX/include/zstd.h"
# rm -rf "$PREFIX/include/jemalloc"
# rm -rf "$PREFIX/include/boost"
# rm -rf "$PREFIX/include/double-conversion"
# rm -rf "$PREFIX/include/gflags"
# rm -rf "$PREFIX/include/libunwind.h"
# rm -rf "$PREFIX/include/glog"
# rm -rf "$PREFIX/include/event2"
# rm -rf "$PREFIX/include/sodium.h"
# rm -rf "$PREFIX/include/libaio.h"
# rm -rf "$PREFIX/include/FlexLexer.h"
# rm -rf "$PREFIX/include/snappy.h"
# rm -rf "$PREFIX/include/fizz"
# rm -rf "$PREFIX/include/folly"
# rm -rf "$PREFIX/include/proxygen"
# rm -rf "$PREFIX/include/wangle"
# rm -rf "$PREFIX/include/thrift"
# rm -rf "$PREFIX"


@ -0,0 +1,41 @@
diff -ur a/folly/CMakeLists.txt b/folly/CMakeLists.txt
--- a/folly/CMakeLists.txt 2021-12-12 23:10:42.000000000 +0100
+++ b/folly/CMakeLists.txt 2022-02-03 15:19:41.349693134 +0100
@@ -28,7 +28,6 @@
)
add_subdirectory(experimental/exception_tracer)
-add_subdirectory(logging/example)
if (PYTHON_EXTENSIONS)
# Create tree of symbolic links in structure required for successful
diff -ur a/folly/experimental/exception_tracer/ExceptionTracerLib.cpp b/folly/experimental/exception_tracer/ExceptionTracerLib.cpp
--- a/folly/experimental/exception_tracer/ExceptionTracerLib.cpp 2021-12-12 23:10:42.000000000 +0100
+++ b/folly/experimental/exception_tracer/ExceptionTracerLib.cpp 2022-02-03 15:19:11.003368891 +0100
@@ -96,6 +96,7 @@
#define __builtin_unreachable()
#endif
+#if 0
namespace __cxxabiv1 {
void __cxa_throw(
@@ -154,5 +155,5 @@
}
} // namespace std
-
+#endif
#endif // defined(__GLIBCXX__)
diff -ur a/folly/Portability.h b/folly/Portability.h
--- a/folly/Portability.h 2021-12-12 23:10:42.000000000 +0100
+++ b/folly/Portability.h 2022-02-03 15:19:11.003368891 +0100
@@ -566,7 +566,7 @@
#define FOLLY_HAS_COROUTINES 0
#elif (__cpp_coroutines >= 201703L || __cpp_impl_coroutine >= 201902L) && \
(__has_include(<coroutine>) || __has_include(<experimental/coroutine>))
-#define FOLLY_HAS_COROUTINES 1
+#define FOLLY_HAS_COROUTINES 0
// This is mainly to workaround bugs triggered by LTO, when stack allocated
// variables in await_suspend end up on a coroutine frame.
#define FOLLY_CORO_AWAIT_SUSPEND_NONTRIVIAL_ATTRIBUTES FOLLY_NOINLINE


@ -0,0 +1,26 @@
diff --git a/folly/CMakeLists.txt b/folly/CMakeLists.txt
index e0e16df..471131e 100644
--- a/folly/CMakeLists.txt
+++ b/folly/CMakeLists.txt
@@ -28,7 +28,7 @@ install(
)
add_subdirectory(experimental/exception_tracer)
-add_subdirectory(logging/example)
+# add_subdirectory(logging/example)
if (PYTHON_EXTENSIONS)
# Create tree of symbolic links in structure required for successful
diff --git a/folly/Portability.h b/folly/Portability.h
index 365ef1b..42d24b8 100644
--- a/folly/Portability.h
+++ b/folly/Portability.h
@@ -560,7 +560,7 @@ constexpr auto kCpplibVer = 0;
(defined(__cpp_coroutines) && __cpp_coroutines >= 201703L) || \
(defined(__cpp_impl_coroutine) && __cpp_impl_coroutine >= 201902L)) && \
(__has_include(<coroutine>) || __has_include(<experimental/coroutine>))
-#define FOLLY_HAS_COROUTINES 1
+#define FOLLY_HAS_COROUTINES 0
// This is mainly to workaround bugs triggered by LTO, when stack allocated
// variables in await_suspend end up on a coroutine frame.
#define FOLLY_CORO_AWAIT_SUSPEND_NONTRIVIAL_ATTRIBUTES FOLLY_NOINLINE


@ -0,0 +1,29 @@
diff -ur a/CMakeLists.txt b/CMakeLists.txt
--- a/CMakeLists.txt 2021-05-05 00:53:34.000000000 +0200
+++ b/CMakeLists.txt 2022-01-27 17:18:34.758302398 +0100
@@ -52,9 +52,9 @@
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /EHs-c-")
add_definitions(-D_HAS_EXCEPTIONS=0)
- # Disable RTTI.
- string(REGEX REPLACE "/GR" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /GR-")
+ # # Disable RTTI.
+ # string(REGEX REPLACE "/GR" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
+ # set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /GR-")
else(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# Use -Wall for clang and gcc.
if(NOT CMAKE_CXX_FLAGS MATCHES "-Wall")
@@ -77,9 +77,9 @@
string(REGEX REPLACE "-fexceptions" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-exceptions")
- # Disable RTTI.
- string(REGEX REPLACE "-frtti" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-rtti")
+ # # Disable RTTI.
+ # string(REGEX REPLACE "-frtti" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
+ # set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-rtti")
endif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# BUILD_SHARED_LIBS is a standard CMake variable, but we declare it here to make


@ -0,0 +1,75 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBEzEOZIBEACxg/IuXERlDB48JBWmF4NxNUuuup1IhJAJyFGFSKh3OGAO2Ard
sNuRLjANsFXA7m7P5eTFcG+BoHHuAVYmKnI3PPZtHVLnUt4pGItPczQZ2BE1WpcI
ayjGTBJeKItX3Npqg9D/odO9WWS1i3FQPVdrLn0YH37/BA66jeMQCRo7g7GLpaNf
IrvYGsqTbxCwsmA37rpE7oyU4Yrf74HT091WBsRIoq/MelhbxTDMR8eu/dUGZQVc
Kj3lN55RepwWwUUKyqarY0zMt4HkFJ7v7yRL+Cvzy92Ouv4Wf2FlhNtEs5LE4Tax
W0PO5AEmUoKjX87SezQK0f652018b4u6Ex52cY7p+n5TII/UyoowH6+tY8UHo9yb
fStrqgNE/mY2bhA6+AwCaOUGsFzVVPTbjtxL3HacUP/jlA1h78V8VTvTs5d55iG7
jSqR9o05wje8rwNiXXK0xtiJahyNzL97Kn/DgPSqPIi45G+8nxWSPFM5eunBKRl9
vAnsvwrdPRsR6YR3uMHTuVhQX9/CY891MHkaZJ6wydWtKt3yQwJLYqwo5d4DwnUX
CduUwSKv+6RmtWI5ZmTQYOcBRcZyGKml9X9Q8iSbm6cnpFXmLrNQwCJN+D3SiYGc
MtbltZo0ysPMa6Xj5xFaYqWk/BI4iLb2Gs+ByGo/+a0Eq4XYBMOpitNniQARAQAB
tCdMYXNzZSBDb2xsaW4gPGxhc3NlLmNvbGxpbkB0dWthYW5pLm9yZz6JAlEEEwEK
ADsCGwMCHgECF4AECwkIBwMVCggFFgIDAQAWIQQ2kMJAzlG0Zw0wrRw47nV9aRhG
IAUCYEt9dQUJFxeR4wAKCRA47nV9aRhGIBNDEACxD6vJ+enZwe3IgkJh5JtLsC9b
MWCQRlPW1EVMsg96Cb5Rtron1eN1pp1TlzENJu1/C7C/VEsr9WwOPg26Men7fNf/
O21QM9IBWd/uB0Pu333WqKh92ESS5x9ST9DrG39nVGSPkQQBMuia72VrA+crPnwT
/h/u1IN6/sff5VDIU24rUiqW2Npy733dANruj7Ny0scRXVPltnVdhqwPHt6qNjC1
t+/cCnwHgW1BR1RYXBPpB42z/m29dL9rPrG0YPGWs2Bc+EATUICfEE6eIvwfciue
IJTjKT9Y9DrogJC2AYFhjC7N04OKdCB2hFs4BjexJwr4X0GJO7LhFl03c951AsIE
GHwrucRPB5bo2vmvQ8IvZn7CmtdUJzXv9JlyU6p+MIK1pz7TK6GgSOSffQIXZn6e
nUPtm9mEwuncOfmW8/ODYPs1gCWYgyiFJx8h7eEu+M4MxHSFBs7MwXf/Ae2fSp+M
P/p198qB8fC5oVBnF95qb0Qi0uc1D+Gb+gpBF+ymMb+s/VBOR3QWiym7AzBrJ62g
UnbC9jMLGnSRI+7p7raUfMTgXr5/oQoBw7ExJVltSSRrim2YH/t4CV47mO6dR9J3
1RtsTFIRNhz+07XPsETcuCV/dgqeC8fOFLt9MY17Sufhb1DcGy4urZBOIhXcpTV7
vHVj5IYH5nYOT49NRYkCOAQTAQIAIgUCTMQ5kgIbAwYLCQgHAwIGFQgCCQoLBBYC
AwECHgECF4AACgkQOO51fWkYRiAg4A/7BXKwoRaXrMbMPOW7vuVF7c2IKB2Yqzn1
vLBCwuEHkqY237lDcXY4/5LR+1gcZ3Duw1n/BRSm0FBdvyX/JTWiWNSDUkKAO/0l
T2Tg44YLrDT3bzwu8dbU9xQt6kH+SCOHvv5Oe4k79l5mro6fF3H1M0bN63x/YoFY
ojy09D7/JptY82oR4f/VdKnfZLJcCViCb0wp8SD2NkDAudKg+K+7PD8HlTWklQQg
TZdRXxVZKIJeU42aJDqnRbAhJd64YHyClhqut9F5LUmiP5qfLfNhkKDhNOwk2Blr
BGBJkSd7wPyzcX4Mun/L6YspHjbeVMt9TD7HQlo+OOd2OjAHCx6pqwkXnzeLPEaE
cPdQ1SHgrBViAxX3DNPubLP0Knw8XwFu96EuhHZgexE1W7bB4LFsJyXAc5k1PqPD
CLsAauxmvI2OfI7opG/8wyxDvNgoPjG8fZNAgY0REqPC0JnTXChH31IxUmhNotH8
tD3DDTZOHw05n5MwwUrEE9xiETVDfFQcMLfxZ9KLz+BC2g1t5LYublRgnCMNJzFg
sNUMM02CphABzl/LCLnumr0eyQQ/weV4twEhLwSDmqLYHL0EdYW0Y3CnnU9vmYxQ
cXKbstS71sEJJYBBmSBbf9GxkOY8BRNtwVwY0kPgxv1WqdVBiAFvfB+pyAsrax9B
3UeB7ZSwRD6JAhwEEAEKAAYFAlS25GwACgkQlbYYGy0z6ew92Q//ZA9/6piQtoW4
PwP/1DtWGyKU8hwR+9FG669iPk/dAG+yoEJtFMOUpg/FUFmCX8Bc4oEHsCVyLxKt
DcCVUIRcYNSFi5hTZaBEbwsOlDT37gtlfIIu34hhHRccKaLnN/N9gNMNw8wGh9xg
Q/KtxZwcbk/bZIlDkKTJkFBRAekdEGAFDWb/AZOy+LQxS8ZAh1eWkfV0i8opmK9k
gPXtLE0WSsqtYyGs58z+BFE9NH3tEUwK6jSvtuLwQl4UrICNbKthcpb8WwH6UXzb
q3QNSYVOpf/cqRdBJA6bvb/ku/xyKVL08lGmxD9v1b137R7mafDAFPTsvH2Mt/0V
YuhtWav3r1Bl9QksDxt2DTS8wiWDUBetGqOVdcw7vBrXPEWDNBmxeJXsiJ7zJlR+
9wrJOm6RV2+l1IPxu96EaPS+kTNBijKrhxb67bww8BTEWTd0wcdJmgWRkM8SIstp
IKqd0L2TFYph2/NtrBhRg+DIEPJPpSTGsUMcCEXCZPQ+cIdlQKsWpk0tZ62DlvEl
r7E+wgUSQolRfx5KrpZifiS2zQlhzdXv28CJhsVbLyw5fUAWUKIH/dCo5NKsNLk2
Lc5DH9VWnFgxAAtW290FqeK/4ulMq7Vs1dQSwyHM2Ni3QqqeaiOrh8gbSY5CMLFN
Y3HYRwuTYPa3AobsozCzBj0Zdf/6AFe5Ag0ETMQ5kgEQAL/FwKdjxgPxtSpgq1SM
zgZtTTyLqhgGD3NZfadHWHYRIL38NDV3JeTA79Y2zj2dj7KQPDT+0aqeizTV2E3j
P3iCQ53VOT4consBaQAgKexpptnS+T1DobtICFJ0GGzf0HRj6KO2zSOuOitWPWlU
wbvX7M0LLI2+hqlx0jTPqbJFZ/Za6KTtbS6xdCPVUpUqYZQpokEZcwQmUp8Q+lGo
JD2sNYCZyap63X/aAOgCGr2RXYddOH5e8vGzGW+mwtCv+WQ9Ay35mGqI5MqkbZd1
Qbuv2b1647E/QEEucfRHVbJVKGGPpFMUJtcItyyIt5jo+r9CCL4Cs47dF/9/RNwu
NvpvHXUyqMBQdWNZRMx4k/NGD/WviPi9m6mIMui6rOQsSOaqYdcUX4Nq2Orr3Oaz
2JPQdUfeI23iot1vK8hxvUCQTV3HfJghizN6spVl0yQOKBiE8miJRgrjHilH3hTb
xoo42xDkNAq+CQo3QAm1ibDxKCDq0RcWPjcCRAN/Q5MmpcodpdKkzV0yGIS4g7s5
frVrgV/kox2r4/Yxsr8K909+4H82AjTKGX/BmsQFCTAqBk6p7I0zxjIqJ/w33TZB
Q0Pn4r3WIlUPafzY6a9/LAvN1fHRxf9SpCByJsszD03Qu5f5TB8gthsdnVmTo7jj
iordEKMtw2aEMLzdWWTQ/TNVABEBAAGJAjwEGAEKACYCGwwWIQQ2kMJAzlG0Zw0w
rRw47nV9aRhGIAUCYEt9YAUJFxeRzgAKCRA47nV9aRhGIMLtD/9HuKM4pngImcuz
YwzQmdv4j26YYyh4jVsKEmVWTiRcehEgUIlrWkCu3qzd5NK+RetS7kJ8MPnzEUfj
YbpdC6yrF6n1mSrZZ4VJMkV2ev37bIgXM+Wp1mCAGbjNxQnjn9RabT/gjIqmGuRn
AP7RsSeOSuO/gO9h2Pteciz23ussTilB+8cTooQEQQZe6Kv/zukvL+ccSehLHsZ7
qVfRUAmtt8nFkXXE+s8jfLfhqstaI2/RJu5witaPcXM8Mnz2E95aASAbZy0eQot9
0Pvf07n9yuC3tueTvzvlXx3h5U3yT44tIOmzANIQjay1TGdm+RBJ2ZYyhyLawlZ2
NVUXXSp4QZZXPA0UWbF+pb7Q9cdKDNFVuvGBljuea0Yd0T2o+ibDq43HziX9ll+l
SXk9mqvW1UcDOaxWrSsm1Gc1O9g3wqH5xHAhtY8GPh/7VgAawskPkmnlkMW6pYPy
zibbeISJL1gd1jIT63y6aoVrtNoo+wYJm280ROflh4+5QOo6QJ+jm70fkXSG/qJ5
a8/qCPTHkJc/rpkL6/TDQAJURi9RhDAC0gb40HtusbN1LZEA+i0cWTmYXap+DB4Y
R4pApilpaG87M+VUokR4xpnx7vTb2MPa7Mdenvi9FEGnKXadmT8038vlfzz5GGUT
MlVin9BQPTpdA+PpRiJvKJgVDeAFOg==
=asTC
-----END PGP PUBLIC KEY BLOCK-----


@ -1,11 +1,18 @@
#!/bin/bash
operating_system() {
grep -E '^(VERSION_)?ID=' /etc/os-release | \
sort | cut -d '=' -f 2- | sed 's/"//g' | paste -s -d '-'
function operating_system() {
if [[ "$OSTYPE" == "linux-gnu"* ]]; then
grep -E '^(VERSION_)?ID=' /etc/os-release | \
sort | cut -d '=' -f 2- | sed 's/"//g' | paste -s -d '-'
elif [[ "$OSTYPE" == "darwin"* ]]; then
echo "$(sw_vers -productName)-$(sw_vers -productVersion | cut -d '.' -f 1)"
else
echo "operating_system called on an unknown OS"
exit 1
fi
}
check_operating_system() {
function check_operating_system() {
if [ "$(operating_system)" != "$1" ]; then
echo "Not the right operating system!"
exit 1
@ -14,20 +21,25 @@ check_operating_system() {
fi
}
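# Usage sketch (the distro string is illustrative; it must match the
# ID/VERSION_ID-derived output of operating_system above):
#   check_operating_system "debian-12"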
architecture() {
function architecture() {
uname -m
}
check_architecture() {
if [ "$(architecture)" != "$1" ]; then
echo "Not the right architecture!"
exit 1
else
echo "The right architecture."
fi
local ARCH=$(architecture)
for arch in "$@"; do
if [ "${ARCH}" = "$arch" ]; then
echo "The right architecture!"
return 0
fi
done
echo "Not the right architecture!"
echo "Expected: $@"
echo "Actual: ${ARCH}"
exit 1
}
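# Usage sketch: the variadic form accepts any number of allowed spellings,
# e.g. (values illustrative)
#   check_architecture "x86_64" "amd64"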
check_all_yum() {
function check_all_yum() {
local missing=""
for pkg in $1; do
if ! yum list installed "$pkg" >/dev/null 2>/dev/null; then
@ -40,7 +52,7 @@ check_all_yum() {
fi
}
check_all_dpkg() {
function check_all_dpkg() {
local missing=""
for pkg in $1; do
if ! dpkg -s "$pkg" >/dev/null 2>/dev/null; then
@ -53,7 +65,7 @@ check_all_dpkg() {
fi
}
check_all_dnf() {
function check_all_dnf() {
local missing=""
for pkg in $1; do
if ! dnf list installed "$pkg" >/dev/null 2>/dev/null; then
@ -65,8 +77,34 @@ check_all_dnf() {
exit 1
fi
}
install_all_apt() {
function install_all_apt() {
for pkg in $1; do
apt install -y "$pkg"
done
}
function install_custom_golang() {
# NOTE: The official instructions at https://go.dev/doc/manage-install don't seem to work.
GOVERSION="$1"
GOINSTALLDIR="/opt/go$GOVERSION"
GOROOT="$GOINSTALLDIR/go" # GOPATH=$HOME/go
if [ ! -f "$GOROOT/bin/go" ]; then
curl -LO https://go.dev/dl/go$GOVERSION.linux-amd64.tar.gz
mkdir -p "$GOINSTALLDIR"
tar -C "$GOINSTALLDIR" -xzf go$GOVERSION.linux-amd64.tar.gz
fi
echo "go $GOVERSION installed under $GOROOT"
}
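# Usage sketch (the version is illustrative): callers still have to put the
# resulting GOROOT on PATH themselves, e.g.
#   install_custom_golang "1.21.5"
#   export PATH="/opt/go1.21.5/go/bin:$PATH"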
function install_custom_maven() {
MVNVERSION="$1"
MVNINSTALLDIR="/opt/apache-maven-$MVNVERSION"
MVNURL="https://s3.eu-west-1.amazonaws.com/deps.memgraph.io/maven/apache-maven-$MVNVERSION-bin.tar.gz"
if [ ! -f "$MVNINSTALLDIR/bin/mvn" ]; then
echo "Downloading maven from $MVNURL"
curl -LO "$MVNURL"
tar -C "/opt" -xzf "apache-maven-$MVNVERSION-bin.tar.gz"
fi
echo "maven $MVNVERSION installed under $MVNINSTALLDIR"
}
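# Usage sketch (version illustrative), analogous to the golang helper:
#   install_custom_maven "3.9.3"
#   export PATH="/opt/apache-maven-3.9.3/bin:$PATH"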

import/mglogs2cypherl.sh Executable file

@ -0,0 +1,26 @@
#!/bin/bash -e
COLOR_ORANGE="\e[38;5;208m"
COLOR_GREEN="\e[38;5;35m"
COLOR_RED="\e[0;31m"
COLOR_NULL="\e[0m"
print_help() {
echo -e "${COLOR_ORANGE}HOW TO RUN:${COLOR_NULL} $0 memgraph_logs_file_path cypherl_output_path"
exit 1
}
if [ "$#" -ne 2 ]; then
print_help
fi
INPUT="$1"
OUTPUT="$2"
if [ ! -f "$INPUT" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} memgraph_logs_file_path is not a file!"
print_help
fi
awk -v RS="Run] '" 'NR>1 { print $0 }' < "$INPUT" | sed -e "/^\[/d;" -e "s/'\([^']*\)$/;/g" > "$OUTPUT"
echo -e "${COLOR_GREEN}DONE!${COLOR_NULL} Please find Memgraph compatible cypherl file under $OUTPUT"
echo ""
echo "Import can be done by executing => \`cat $OUTPUT | mgconsole\`"

import/n2mg_cypherl.sh Executable file

@ -0,0 +1,39 @@
#!/bin/bash -e
COLOR_ORANGE="\e[38;5;208m"
COLOR_GREEN="\e[38;5;35m"
COLOR_RED="\e[0;31m"
COLOR_NULL="\e[0m"
print_help() {
echo -e "${COLOR_ORANGE}HOW TO RUN:${COLOR_NULL} $0 input_file_path output_file_path"
exit 1
}
if [ "$#" -ne 2 ]; then
print_help
fi
INPUT="$1"
OUTPUT="$2"
if [ ! -f "$INPUT" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} BEGIN and COMMIT are required because variables share the same name (e.g. row)"
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} CONSTRAINTS are just skipped -> ${COLOR_RED}please create constraints manually if needed${COLOR_NULL}"
echo 'CREATE INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' > "$OUTPUT"
sed -e 's/^:begin/BEGIN/g; s/^BEGIN$/BEGIN;/g;' \
-e 's/^:commit/COMMIT/g; s/^COMMIT$/COMMIT;/g;' \
-e '/^CALL/d; /^SCHEMA AWAIT/d;' \
-e 's/CREATE RANGE INDEX FOR (n:/CREATE INDEX ON :/g;' \
-e 's/) ON (n./(/g;' \
-e '/^CREATE CONSTRAINT/d; /^DROP CONSTRAINT/d;' "$INPUT" >> "$OUTPUT"
echo 'DROP INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' >> "$OUTPUT"
echo ""
echo -e "${COLOR_GREEN}DONE!${COLOR_NULL} Please find Memgraph compatible cypherl|.cypher file under $OUTPUT"
echo ""
echo "Please import data by executing => \`cat $OUTPUT | mgconsole\`"


@ -0,0 +1,61 @@
#!/bin/bash -e
COLOR_ORANGE="\e[38;5;208m"
COLOR_GREEN="\e[38;5;35m"
COLOR_RED="\e[0;31m"
COLOR_NULL="\e[0m"
print_help() {
echo -e "${COLOR_ORANGE}HOW TO RUN:${COLOR_NULL} $0 input_file_schema_path input_file_nodes_path input_file_relationships_path input_file_cleanup_path output_file_path"
exit 1
}
if [ "$#" -ne 5 ]; then
print_help
fi
INPUT_SCHEMA="$1"
INPUT_NODES="$2"
INPUT_RELATIONSHIPS="$3"
INPUT_CLEANUP="$4"
OUTPUT="$5"
if [ ! -f "$INPUT_SCHEMA" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_NODES" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_RELATIONSHIPS" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_CLEANUP" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} BEGIN and COMMIT are required because variables share the same name (e.g. row)"
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} CONSTRAINTS are just skipped -> ${COLOR_RED}please create constraints manually if needed${COLOR_NULL}"
echo 'CREATE INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' > "$OUTPUT"
sed -e 's/CREATE RANGE INDEX FOR (n:/CREATE INDEX ON :/g;' \
-e 's/) ON (n./(/g;' \
-e '/^CREATE CONSTRAINT/d' $INPUT_SCHEMA >> "$OUTPUT"
cat "$INPUT_NODES" >> "$OUTPUT"
cat "$INPUT_RELATIONSHIPS" >> "$OUTPUT"
sed -e '/^DROP CONSTRAINT/d' "$INPUT_CLEANUP" >> "$OUTPUT"
echo 'DROP INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' >> "$OUTPUT"
echo ""
echo -e "${COLOR_GREEN}DONE!${COLOR_NULL} Please find Memgraph compatible cypherl|.cypher file under $OUTPUT"
echo ""
echo "Please import data by executing => \`cat $OUTPUT | mgconsole\`"


@ -0,0 +1,64 @@
#!/bin/bash -e
COLOR_ORANGE="\e[38;5;208m"
COLOR_GREEN="\e[38;5;35m"
COLOR_RED="\e[0;31m"
COLOR_NULL="\e[0m"
print_help() {
echo -e "${COLOR_ORANGE}HOW TO RUN:${COLOR_NULL} $0 input_file_schema_path input_file_nodes_path input_file_relationships_path input_file_cleanup_path output_file_schema_path output_file_nodes_path output_file_relationships_path output_file_cleanup_path"
exit 1
}
if [ "$#" -ne 8 ]; then
print_help
fi
INPUT_SCHEMA="$1"
INPUT_NODES="$2"
INPUT_RELATIONSHIPS="$3"
INPUT_CLEANUP="$4"
OUTPUT_SCHEMA="$5"
OUTPUT_NODES="$6"
OUTPUT_RELATIONSHIPS="$7"
OUTPUT_CLEANUP="$8"
if [ ! -f "$INPUT_SCHEMA" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_NODES" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_RELATIONSHIPS" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
if [ ! -f "$INPUT_CLEANUP" ]; then
echo -e "${COLOR_RED}ERROR:${COLOR_NULL} input_file_path is not a file!"
print_help
fi
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} BEGIN and COMMIT are required because variables share the same name (e.g. row)"
echo -e "${COLOR_ORANGE}NOTE:${COLOR_NULL} CONSTRAINTS are just skipped -> ${COLOR_RED}please create constraints manually if needed${COLOR_NULL}"
echo 'CREATE INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' > "$OUTPUT_SCHEMA"
sed -e 's/CREATE RANGE INDEX FOR (n:/CREATE INDEX ON :/g;' \
-e 's/) ON (n./(/g;' \
-e '/^CREATE CONSTRAINT/d' $INPUT_SCHEMA >> "$OUTPUT_SCHEMA"
cat "$INPUT_NODES" > "$OUTPUT_NODES"
cat "$INPUT_RELATIONSHIPS" > "$OUTPUT_RELATIONSHIPS"
sed -e '/^DROP CONSTRAINT/d' "$INPUT_CLEANUP" >> "$OUTPUT_CLEANUP"
echo 'DROP INDEX ON :`UNIQUE IMPORT LABEL`(`UNIQUE IMPORT ID`);' >> "$OUTPUT_CLEANUP"
echo ""
echo -e "${COLOR_GREEN}DONE!${COLOR_NULL} Please find Memgraph compatible cypherl|.cypher files under $OUTPUT_SCHEMA, $OUTPUT_NODES, $OUTPUT_RELATIONSHIPS and $OUTPUT_CLEANUP"
echo ""
echo "Please import data by executing => \`cat $OUTPUT_SCHEMA | mgconsole\`, \`cat $OUTPUT_NODES | mgconsole\`, \`cat $OUTPUT_RELATIONSHIPS | mgconsole\` and \`cat $OUTPUT_CLEANUP | mgconsole\`"

include/_mgp.hpp Normal file

@ -0,0 +1,869 @@
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
/// @file _mgp.hpp
///
/// This file contains the methods that connect MG procedures with outside
/// code: methods such as mapping a graph into memory and assigning new MG
/// results and their properties are implemented here.
#pragma once
#include "mg_exceptions.hpp"
#include "mg_procedure.h"
namespace mgp {
namespace {
inline void MgExceptionHandle(mgp_error result_code) {
switch (result_code) {
case mgp_error::MGP_ERROR_UNKNOWN_ERROR:
throw mg_exception::UnknownException();
case mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE:
throw mg_exception::AllocationException();
case mgp_error::MGP_ERROR_INSUFFICIENT_BUFFER:
throw mg_exception::InsufficientBufferException();
case mgp_error::MGP_ERROR_OUT_OF_RANGE:
throw mg_exception::OutOfRangeException();
case mgp_error::MGP_ERROR_LOGIC_ERROR:
throw mg_exception::LogicException();
case mgp_error::MGP_ERROR_DELETED_OBJECT:
throw mg_exception::DeletedObjectException();
case mgp_error::MGP_ERROR_INVALID_ARGUMENT:
throw mg_exception::InvalidArgumentException();
case mgp_error::MGP_ERROR_KEY_ALREADY_EXISTS:
throw mg_exception::KeyAlreadyExistsException();
case mgp_error::MGP_ERROR_IMMUTABLE_OBJECT:
throw mg_exception::ImmutableObjectException();
case mgp_error::MGP_ERROR_VALUE_CONVERSION:
throw mg_exception::ValueConversionException();
case mgp_error::MGP_ERROR_SERIALIZATION_ERROR:
throw mg_exception::SerializationException();
default:
return;
}
}
template <typename TResult, typename TFunc, typename... TArgs>
TResult MgInvoke(TFunc func, TArgs... args) {
TResult result{};
auto result_code = func(args..., &result);
MgExceptionHandle(result_code);
return result;
}
template <typename TFunc, typename... TArgs>
inline void MgInvokeVoid(TFunc func, TArgs... args) {
auto result_code = func(args...);
MgExceptionHandle(result_code);
}
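// Usage sketch (illustrative, not part of the original header): every wrapper
// below is built on these two helpers; a call such as
//   mgp_value *v = MgInvoke<mgp_value *>(mgp_value_make_int, int64_t{42}, memory);
// forwards the trailing output parameter and turns any non-success mgp_error
// into the corresponding mg_exception instead of returning an error code.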
} // namespace
// mgp_value
// Make value
inline mgp_value *value_make_null(mgp_memory *memory) { return MgInvoke<mgp_value *>(mgp_value_make_null, memory); }
inline mgp_value *value_make_bool(int val, mgp_memory *memory) {
return MgInvoke<mgp_value *>(mgp_value_make_bool, val, memory);
}
inline mgp_value *value_make_int(int64_t val, mgp_memory *memory) {
return MgInvoke<mgp_value *>(mgp_value_make_int, val, memory);
}
inline mgp_value *value_make_double(double val, mgp_memory *memory) {
return MgInvoke<mgp_value *>(mgp_value_make_double, val, memory);
}
inline mgp_value *value_make_string(const char *val, mgp_memory *memory) {
return MgInvoke<mgp_value *>(mgp_value_make_string, val, memory);
}
inline mgp_value *value_make_list(mgp_list *val) { return MgInvoke<mgp_value *>(mgp_value_make_list, val); }
inline mgp_value *value_make_map(mgp_map *val) { return MgInvoke<mgp_value *>(mgp_value_make_map, val); }
inline mgp_value *value_make_vertex(mgp_vertex *val) { return MgInvoke<mgp_value *>(mgp_value_make_vertex, val); }
inline mgp_value *value_make_edge(mgp_edge *val) { return MgInvoke<mgp_value *>(mgp_value_make_edge, val); }
inline mgp_value *value_make_path(mgp_path *val) { return MgInvoke<mgp_value *>(mgp_value_make_path, val); }
inline mgp_value *value_make_date(mgp_date *val) { return MgInvoke<mgp_value *>(mgp_value_make_date, val); }
inline mgp_value *value_make_local_time(mgp_local_time *val) {
return MgInvoke<mgp_value *>(mgp_value_make_local_time, val);
}
inline mgp_value *value_make_local_date_time(mgp_local_date_time *val) {
return MgInvoke<mgp_value *>(mgp_value_make_local_date_time, val);
}
inline mgp_value *value_make_duration(mgp_duration *val) { return MgInvoke<mgp_value *>(mgp_value_make_duration, val); }
// Copy value
// TODO: implement within the MGP API:
// for primitive types ({bool, int, double, string}), create a new identical value;
// otherwise, call mgp_##TYPE_copy and convert the type
inline mgp_value *value_copy(mgp_value *val, mgp_memory *memory) {
return MgInvoke<mgp_value *>(mgp_value_copy, val, memory);
}
// Destroy value
inline void value_destroy(mgp_value *val) { mgp_value_destroy(val); }
// Get value of type
inline mgp_value_type value_get_type(mgp_value *val) { return MgInvoke<mgp_value_type>(mgp_value_get_type, val); }
inline bool value_get_bool(mgp_value *val) { return MgInvoke<int>(mgp_value_get_bool, val); }
inline int64_t value_get_int(mgp_value *val) { return MgInvoke<int64_t>(mgp_value_get_int, val); }
inline double value_get_double(mgp_value *val) { return MgInvoke<double>(mgp_value_get_double, val); }
inline double value_get_numeric(mgp_value *val) {
if (MgInvoke<int>(mgp_value_is_int, val)) {
return static_cast<double>(value_get_int(val));
}
return value_get_double(val);
}
inline const char *value_get_string(mgp_value *val) { return MgInvoke<const char *>(mgp_value_get_string, val); }
inline mgp_list *value_get_list(mgp_value *val) { return MgInvoke<mgp_list *>(mgp_value_get_list, val); }
inline mgp_map *value_get_map(mgp_value *val) { return MgInvoke<mgp_map *>(mgp_value_get_map, val); }
inline mgp_vertex *value_get_vertex(mgp_value *val) { return MgInvoke<mgp_vertex *>(mgp_value_get_vertex, val); }
inline mgp_edge *value_get_edge(mgp_value *val) { return MgInvoke<mgp_edge *>(mgp_value_get_edge, val); }
inline mgp_path *value_get_path(mgp_value *val) { return MgInvoke<mgp_path *>(mgp_value_get_path, val); }
inline mgp_date *value_get_date(mgp_value *val) { return MgInvoke<mgp_date *>(mgp_value_get_date, val); }
inline mgp_local_time *value_get_local_time(mgp_value *val) {
return MgInvoke<mgp_local_time *>(mgp_value_get_local_time, val);
}
inline mgp_local_date_time *value_get_local_date_time(mgp_value *val) {
return MgInvoke<mgp_local_date_time *>(mgp_value_get_local_date_time, val);
}
inline mgp_duration *value_get_duration(mgp_value *val) {
return MgInvoke<mgp_duration *>(mgp_value_get_duration, val);
}
// Check type of value
inline bool value_is_null(mgp_value *val) { return MgInvoke<int>(mgp_value_is_null, val); }
inline bool value_is_bool(mgp_value *val) { return MgInvoke<int>(mgp_value_is_bool, val); }
inline bool value_is_int(mgp_value *val) { return MgInvoke<int>(mgp_value_is_int, val); }
inline bool value_is_double(mgp_value *val) { return MgInvoke<int>(mgp_value_is_double, val); }
inline bool value_is_numeric(mgp_value *val) { return value_is_int(val) || value_is_double(val); }
inline bool value_is_string(mgp_value *val) { return MgInvoke<int>(mgp_value_is_string, val); }
inline bool value_is_list(mgp_value *val) { return MgInvoke<int>(mgp_value_is_list, val); }
inline bool value_is_map(mgp_value *val) { return MgInvoke<int>(mgp_value_is_map, val); }
inline bool value_is_vertex(mgp_value *val) { return MgInvoke<int>(mgp_value_is_vertex, val); }
inline bool value_is_edge(mgp_value *val) { return MgInvoke<int>(mgp_value_is_edge, val); }
inline bool value_is_path(mgp_value *val) { return MgInvoke<int>(mgp_value_is_path, val); }
inline bool value_is_date(mgp_value *val) { return MgInvoke<int>(mgp_value_is_date, val); }
inline bool value_is_local_time(mgp_value *val) { return MgInvoke<int>(mgp_value_is_local_time, val); }
inline bool value_is_local_date_time(mgp_value *val) { return MgInvoke<int>(mgp_value_is_local_date_time, val); }
inline bool value_is_duration(mgp_value *val) { return MgInvoke<int>(mgp_value_is_duration, val); }
// Get type
inline mgp_type *type_any() { return MgInvoke<mgp_type *>(mgp_type_any); }
inline mgp_type *type_bool() { return MgInvoke<mgp_type *>(mgp_type_bool); }
inline mgp_type *type_string() { return MgInvoke<mgp_type *>(mgp_type_string); }
inline mgp_type *type_int() { return MgInvoke<mgp_type *>(mgp_type_int); }
inline mgp_type *type_float() { return MgInvoke<mgp_type *>(mgp_type_float); }
inline mgp_type *type_number() { return MgInvoke<mgp_type *>(mgp_type_number); }
inline mgp_type *type_list(mgp_type *element_type) { return MgInvoke<mgp_type *>(mgp_type_list, element_type); }
inline mgp_type *type_map() { return MgInvoke<mgp_type *>(mgp_type_map); }
inline mgp_type *type_node() { return MgInvoke<mgp_type *>(mgp_type_node); }
inline mgp_type *type_relationship() { return MgInvoke<mgp_type *>(mgp_type_relationship); }
inline mgp_type *type_path() { return MgInvoke<mgp_type *>(mgp_type_path); }
inline mgp_type *type_date() { return MgInvoke<mgp_type *>(mgp_type_date); }
inline mgp_type *type_local_time() { return MgInvoke<mgp_type *>(mgp_type_local_time); }
inline mgp_type *type_local_date_time() { return MgInvoke<mgp_type *>(mgp_type_local_date_time); }
inline mgp_type *type_duration() { return MgInvoke<mgp_type *>(mgp_type_duration); }
inline mgp_type *type_nullable(mgp_type *type) { return MgInvoke<mgp_type *>(mgp_type_nullable, type); }
inline bool create_label_index(mgp_graph *graph, const char *label) {
return MgInvoke<int>(mgp_create_label_index, graph, label);
}
inline bool drop_label_index(mgp_graph *graph, const char *label) {
return MgInvoke<int>(mgp_drop_label_index, graph, label);
}
inline mgp_list *list_all_label_indices(mgp_graph *graph, mgp_memory *memory) {
return MgInvoke<mgp_list *>(mgp_list_all_label_indices, graph, memory);
}
inline bool create_label_property_index(mgp_graph *graph, const char *label, const char *property) {
return MgInvoke<int>(mgp_create_label_property_index, graph, label, property);
}
inline bool drop_label_property_index(mgp_graph *graph, const char *label, const char *property) {
return MgInvoke<int>(mgp_drop_label_property_index, graph, label, property);
}
inline mgp_list *list_all_label_property_indices(mgp_graph *graph, mgp_memory *memory) {
return MgInvoke<mgp_list *>(mgp_list_all_label_property_indices, graph, memory);
}
inline bool create_existence_constraint(mgp_graph *graph, const char *label, const char *property) {
return MgInvoke<int>(mgp_create_existence_constraint, graph, label, property);
}
inline bool drop_existence_constraint(mgp_graph *graph, const char *label, const char *property) {
return MgInvoke<int>(mgp_drop_existence_constraint, graph, label, property);
}
inline mgp_list *list_all_existence_constraints(mgp_graph *graph, mgp_memory *memory) {
return MgInvoke<mgp_list *>(mgp_list_all_existence_constraints, graph, memory);
}
inline bool create_unique_constraint(mgp_graph *memgraph_graph, const char *label, mgp_value *properties) {
return MgInvoke<int>(mgp_create_unique_constraint, memgraph_graph, label, properties);
}
inline bool drop_unique_constraint(mgp_graph *memgraph_graph, const char *label, mgp_value *properties) {
return MgInvoke<int>(mgp_drop_unique_constraint, memgraph_graph, label, properties);
}
inline mgp_list *list_all_unique_constraints(mgp_graph *graph, mgp_memory *memory) {
return MgInvoke<mgp_list *>(mgp_list_all_unique_constraints, graph, memory);
}
// mgp_graph
inline bool graph_is_transactional(mgp_graph *graph) { return MgInvoke<int>(mgp_graph_is_transactional, graph); }
inline bool graph_is_mutable(mgp_graph *graph) { return MgInvoke<int>(mgp_graph_is_mutable, graph); }
inline mgp_vertex *graph_create_vertex(mgp_graph *graph, mgp_memory *memory) {
return MgInvoke<mgp_vertex *>(mgp_graph_create_vertex, graph, memory);
}
inline void graph_delete_vertex(mgp_graph *graph, mgp_vertex *vertex) {
MgInvokeVoid(mgp_graph_delete_vertex, graph, vertex);
}
inline void graph_detach_delete_vertex(mgp_graph *graph, mgp_vertex *vertex) {
MgInvokeVoid(mgp_graph_detach_delete_vertex, graph, vertex);
}
inline mgp_edge *graph_create_edge(mgp_graph *graph, mgp_vertex *from, mgp_vertex *to, mgp_edge_type type,
mgp_memory *memory) {
return MgInvoke<mgp_edge *>(mgp_graph_create_edge, graph, from, to, type, memory);
}
inline mgp_edge *graph_edge_set_from(struct mgp_graph *graph, struct mgp_edge *e, struct mgp_vertex *new_from,
mgp_memory *memory) {
return MgInvoke<mgp_edge *>(mgp_graph_edge_set_from, graph, e, new_from, memory);
}
inline mgp_edge *graph_edge_set_to(struct mgp_graph *graph, struct mgp_edge *e, struct mgp_vertex *new_to,
mgp_memory *memory) {
return MgInvoke<mgp_edge *>(mgp_graph_edge_set_to, graph, e, new_to, memory);
}
inline mgp_edge *graph_edge_change_type(struct mgp_graph *graph, struct mgp_edge *e, struct mgp_edge_type new_type,
mgp_memory *memory) {
return MgInvoke<mgp_edge *>(mgp_graph_edge_change_type, graph, e, new_type, memory);
}
inline void graph_delete_edge(mgp_graph *graph, mgp_edge *edge) { MgInvokeVoid(mgp_graph_delete_edge, graph, edge); }
inline mgp_vertex *graph_get_vertex_by_id(mgp_graph *g, mgp_vertex_id id, mgp_memory *memory) {
return MgInvoke<mgp_vertex *>(mgp_graph_get_vertex_by_id, g, id, memory);
}
inline bool graph_has_text_index(mgp_graph *graph, const char *index_name) {
return MgInvoke<int>(mgp_graph_has_text_index, graph, index_name);
}
inline mgp_map *graph_search_text_index(mgp_graph *graph, const char *index_name, const char *search_query,
text_search_mode search_mode, mgp_memory *memory) {
return MgInvoke<mgp_map *>(mgp_graph_search_text_index, graph, index_name, search_query, search_mode, memory);
}
inline mgp_map *graph_aggregate_over_text_index(mgp_graph *graph, const char *index_name, const char *search_query,
const char *aggregation_query, mgp_memory *memory) {
return MgInvoke<mgp_map *>(mgp_graph_aggregate_over_text_index, graph, index_name, search_query, aggregation_query,
memory);
}
inline mgp_vertices_iterator *graph_iter_vertices(mgp_graph *g, mgp_memory *memory) {
return MgInvoke<mgp_vertices_iterator *>(mgp_graph_iter_vertices, g, memory);
}
// mgp_vertices_iterator
inline void vertices_iterator_destroy(mgp_vertices_iterator *it) { mgp_vertices_iterator_destroy(it); }
inline mgp_vertex *vertices_iterator_get(mgp_vertices_iterator *it) {
return MgInvoke<mgp_vertex *>(mgp_vertices_iterator_get, it);
}
inline mgp_vertex *vertices_iterator_next(mgp_vertices_iterator *it) {
return MgInvoke<mgp_vertex *>(mgp_vertices_iterator_next, it);
}
// mgp_edges_iterator
inline void edges_iterator_destroy(mgp_edges_iterator *it) { mgp_edges_iterator_destroy(it); }
inline mgp_edge *edges_iterator_get(mgp_edges_iterator *it) { return MgInvoke<mgp_edge *>(mgp_edges_iterator_get, it); }
inline mgp_edge *edges_iterator_next(mgp_edges_iterator *it) {
return MgInvoke<mgp_edge *>(mgp_edges_iterator_next, it);
}
// mgp_properties_iterator
inline void properties_iterator_destroy(mgp_properties_iterator *it) { mgp_properties_iterator_destroy(it); }
inline mgp_property *properties_iterator_get(mgp_properties_iterator *it) {
return MgInvoke<mgp_property *>(mgp_properties_iterator_get, it);
}
inline mgp_property *properties_iterator_next(mgp_properties_iterator *it) {
return MgInvoke<mgp_property *>(mgp_properties_iterator_next, it);
}
// Container {mgp_list, mgp_map} methods
// mgp_list
inline mgp_list *list_make_empty(size_t capacity, mgp_memory *memory) {
return MgInvoke<mgp_list *>(mgp_list_make_empty, capacity, memory);
}
inline mgp_list *list_copy(mgp_list *list, mgp_memory *memory) {
return MgInvoke<mgp_list *>(mgp_list_copy, list, memory);
}
inline void list_destroy(mgp_list *list) { mgp_list_destroy(list); }
inline bool list_contains_deleted(mgp_list *list) { return MgInvoke<int>(mgp_list_contains_deleted, list); }
inline void list_append(mgp_list *list, mgp_value *val) { MgInvokeVoid(mgp_list_append, list, val); }
inline void list_append_extend(mgp_list *list, mgp_value *val) { MgInvokeVoid(mgp_list_append_extend, list, val); }
inline size_t list_size(mgp_list *list) { return MgInvoke<size_t>(mgp_list_size, list); }
inline size_t list_capacity(mgp_list *list) { return MgInvoke<size_t>(mgp_list_capacity, list); }
inline mgp_value *list_at(mgp_list *list, size_t index) { return MgInvoke<mgp_value *>(mgp_list_at, list, index); }
// mgp_map
inline mgp_map *map_make_empty(mgp_memory *memory) { return MgInvoke<mgp_map *>(mgp_map_make_empty, memory); }
inline mgp_map *map_copy(mgp_map *map, mgp_memory *memory) { return MgInvoke<mgp_map *>(mgp_map_copy, map, memory); }
inline void map_destroy(mgp_map *map) { mgp_map_destroy(map); }
inline bool map_contains_deleted(mgp_map *map) { return MgInvoke<int>(mgp_map_contains_deleted, map); }
inline void map_insert(mgp_map *map, const char *key, mgp_value *value) {
MgInvokeVoid(mgp_map_insert, map, key, value);
}
inline void map_update(mgp_map *map, const char *key, mgp_value *value) {
MgInvokeVoid(mgp_map_update, map, key, value);
}
inline void map_erase(mgp_map *map, const char *key) { MgInvokeVoid(mgp_map_erase, map, key); }
inline size_t map_size(mgp_map *map) { return MgInvoke<size_t>(mgp_map_size, map); }
inline mgp_value *map_at(mgp_map *map, const char *key) { return MgInvoke<mgp_value *>(mgp_map_at, map, key); }
inline bool key_exists(mgp_map *map, const char *key) { return MgInvoke<int>(mgp_key_exists, map, key); }
inline const char *map_item_key(mgp_map_item *item) { return MgInvoke<const char *>(mgp_map_item_key, item); }
inline mgp_value *map_item_value(mgp_map_item *item) { return MgInvoke<mgp_value *>(mgp_map_item_value, item); }
inline mgp_map_items_iterator *map_iter_items(mgp_map *map, mgp_memory *memory) {
return MgInvoke<mgp_map_items_iterator *>(mgp_map_iter_items, map, memory);
}
inline void map_items_iterator_destroy(mgp_map_items_iterator *it) { mgp_map_items_iterator_destroy(it); }
inline mgp_map_item *map_items_iterator_get(mgp_map_items_iterator *it) {
return MgInvoke<mgp_map_item *>(mgp_map_items_iterator_get, it);
}
inline mgp_map_item *map_items_iterator_next(mgp_map_items_iterator *it) {
return MgInvoke<mgp_map_item *>(mgp_map_items_iterator_next, it);
}
// mgp_vertex
inline mgp_vertex_id vertex_get_id(mgp_vertex *v) { return MgInvoke<mgp_vertex_id>(mgp_vertex_get_id, v); }
inline size_t vertex_get_in_degree(mgp_vertex *v) { return MgInvoke<size_t>(mgp_vertex_get_in_degree, v); }
inline size_t vertex_get_out_degree(mgp_vertex *v) { return MgInvoke<size_t>(mgp_vertex_get_out_degree, v); }
inline mgp_vertex *vertex_copy(mgp_vertex *v, mgp_memory *memory) {
return MgInvoke<mgp_vertex *>(mgp_vertex_copy, v, memory);
}
inline void vertex_destroy(mgp_vertex *v) { mgp_vertex_destroy(v); }
inline bool vertex_is_deleted(mgp_vertex *v) { return MgInvoke<int>(mgp_vertex_is_deleted, v); }
inline bool vertex_equal(mgp_vertex *v1, mgp_vertex *v2) { return MgInvoke<int>(mgp_vertex_equal, v1, v2); }
inline size_t vertex_labels_count(mgp_vertex *v) { return MgInvoke<size_t>(mgp_vertex_labels_count, v); }
inline mgp_label vertex_label_at(mgp_vertex *v, size_t index) {
return MgInvoke<mgp_label>(mgp_vertex_label_at, v, index);
}
inline bool vertex_has_label(mgp_vertex *v, mgp_label label) { return MgInvoke<int>(mgp_vertex_has_label, v, label); }
inline bool vertex_has_label_named(mgp_vertex *v, const char *label_name) {
return MgInvoke<int>(mgp_vertex_has_label_named, v, label_name);
}
inline void vertex_add_label(mgp_vertex *vertex, mgp_label label) { MgInvokeVoid(mgp_vertex_add_label, vertex, label); }
inline void vertex_remove_label(mgp_vertex *vertex, mgp_label label) {
MgInvokeVoid(mgp_vertex_remove_label, vertex, label);
}
inline mgp_value *vertex_get_property(mgp_vertex *v, const char *property_name, mgp_memory *memory) {
return MgInvoke<mgp_value *>(mgp_vertex_get_property, v, property_name, memory);
}
inline void vertex_set_property(mgp_vertex *v, const char *property_name, mgp_value *property_value) {
MgInvokeVoid(mgp_vertex_set_property, v, property_name, property_value);
}
inline void vertex_set_properties(mgp_vertex *v, struct mgp_map *properties) {
MgInvokeVoid(mgp_vertex_set_properties, v, properties);
}
inline mgp_properties_iterator *vertex_iter_properties(mgp_vertex *v, mgp_memory *memory) {
return MgInvoke<mgp_properties_iterator *>(mgp_vertex_iter_properties, v, memory);
}
inline mgp_edges_iterator *vertex_iter_in_edges(mgp_vertex *v, mgp_memory *memory) {
return MgInvoke<mgp_edges_iterator *>(mgp_vertex_iter_in_edges, v, memory);
}
inline mgp_edges_iterator *vertex_iter_out_edges(mgp_vertex *v, mgp_memory *memory) {
return MgInvoke<mgp_edges_iterator *>(mgp_vertex_iter_out_edges, v, memory);
}
// mgp_edge
inline mgp_edge_id edge_get_id(mgp_edge *e) { return MgInvoke<mgp_edge_id>(mgp_edge_get_id, e); }
inline mgp_edge *edge_copy(mgp_edge *e, mgp_memory *memory) { return MgInvoke<mgp_edge *>(mgp_edge_copy, e, memory); }
inline void edge_destroy(mgp_edge *e) { mgp_edge_destroy(e); }
inline bool edge_is_deleted(mgp_edge *e) { return MgInvoke<int>(mgp_edge_is_deleted, e); }
inline bool edge_equal(mgp_edge *e1, mgp_edge *e2) { return MgInvoke<int>(mgp_edge_equal, e1, e2); }
inline mgp_edge_type edge_get_type(mgp_edge *e) { return MgInvoke<mgp_edge_type>(mgp_edge_get_type, e); }
inline mgp_vertex *edge_get_from(mgp_edge *e) { return MgInvoke<mgp_vertex *>(mgp_edge_get_from, e); }
inline mgp_vertex *edge_get_to(mgp_edge *e) { return MgInvoke<mgp_vertex *>(mgp_edge_get_to, e); }
inline mgp_value *edge_get_property(mgp_edge *e, const char *property_name, mgp_memory *memory) {
return MgInvoke<mgp_value *>(mgp_edge_get_property, e, property_name, memory);
}
inline void edge_set_property(mgp_edge *e, const char *property_name, mgp_value *property_value) {
MgInvokeVoid(mgp_edge_set_property, e, property_name, property_value);
}
inline void edge_set_properties(mgp_edge *e, struct mgp_map *properties) {
MgInvokeVoid(mgp_edge_set_properties, e, properties);
}
inline mgp_properties_iterator *edge_iter_properties(mgp_edge *e, mgp_memory *memory) {
return MgInvoke<mgp_properties_iterator *>(mgp_edge_iter_properties, e, memory);
}
// mgp_path
inline mgp_path *path_make_with_start(mgp_vertex *vertex, mgp_memory *memory) {
return MgInvoke<mgp_path *>(mgp_path_make_with_start, vertex, memory);
}
inline mgp_path *path_copy(mgp_path *path, mgp_memory *memory) {
return MgInvoke<mgp_path *>(mgp_path_copy, path, memory);
}
inline void path_destroy(mgp_path *path) { mgp_path_destroy(path); }
inline bool path_contains_deleted(mgp_path *path) { return MgInvoke<int>(mgp_path_contains_deleted, path); }
inline void path_expand(mgp_path *path, mgp_edge *edge) { MgInvokeVoid(mgp_path_expand, path, edge); }
inline void path_pop(mgp_path *path) { MgInvokeVoid(mgp_path_pop, path); }
inline size_t path_size(mgp_path *path) { return MgInvoke<size_t>(mgp_path_size, path); }
inline mgp_vertex *path_vertex_at(mgp_path *path, size_t index) {
return MgInvoke<mgp_vertex *>(mgp_path_vertex_at, path, index);
}
inline mgp_edge *path_edge_at(mgp_path *path, size_t index) {
return MgInvoke<mgp_edge *>(mgp_path_edge_at, path, index);
}
inline bool path_equal(mgp_path *p1, mgp_path *p2) { return MgInvoke<int>(mgp_path_equal, p1, p2); }
// Temporal type {mgp_date, mgp_local_time, mgp_local_date_time, mgp_duration} methods
// mgp_date
inline mgp_date *date_from_string(const char *string, mgp_memory *memory) {
return MgInvoke<mgp_date *>(mgp_date_from_string, string, memory);
}
inline mgp_date *date_from_parameters(mgp_date_parameters *parameters, mgp_memory *memory) {
return MgInvoke<mgp_date *>(mgp_date_from_parameters, parameters, memory);
}
inline mgp_date *date_copy(mgp_date *date, mgp_memory *memory) {
return MgInvoke<mgp_date *>(mgp_date_copy, date, memory);
}
inline void date_destroy(mgp_date *date) { mgp_date_destroy(date); }
inline bool date_equal(mgp_date *first, mgp_date *second) { return MgInvoke<int>(mgp_date_equal, first, second); }
inline int date_get_year(mgp_date *date) { return MgInvoke<int>(mgp_date_get_year, date); }
inline int date_get_month(mgp_date *date) { return MgInvoke<int>(mgp_date_get_month, date); }
inline int date_get_day(mgp_date *date) { return MgInvoke<int>(mgp_date_get_day, date); }
inline int64_t date_timestamp(mgp_date *date) { return MgInvoke<int64_t>(mgp_date_timestamp, date); }
inline mgp_date *date_now(mgp_memory *memory) { return MgInvoke<mgp_date *>(mgp_date_now, memory); }
inline mgp_date *date_add_duration(mgp_date *date, mgp_duration *dur, mgp_memory *memory) {
return MgInvoke<mgp_date *>(mgp_date_add_duration, date, dur, memory);
}
inline mgp_date *date_sub_duration(mgp_date *date, mgp_duration *dur, mgp_memory *memory) {
return MgInvoke<mgp_date *>(mgp_date_sub_duration, date, dur, memory);
}
inline mgp_duration *date_diff(mgp_date *first, mgp_date *second, mgp_memory *memory) {
return MgInvoke<mgp_duration *>(mgp_date_diff, first, second, memory);
}
// mgp_local_time
inline mgp_local_time *local_time_from_string(const char *string, mgp_memory *memory) {
return MgInvoke<mgp_local_time *>(mgp_local_time_from_string, string, memory);
}
inline mgp_local_time *local_time_from_parameters(mgp_local_time_parameters *parameters, mgp_memory *memory) {
return MgInvoke<mgp_local_time *>(mgp_local_time_from_parameters, parameters, memory);
}
inline mgp_local_time *local_time_copy(mgp_local_time *local_time, mgp_memory *memory) {
return MgInvoke<mgp_local_time *>(mgp_local_time_copy, local_time, memory);
}
inline void local_time_destroy(mgp_local_time *local_time) { mgp_local_time_destroy(local_time); }
inline bool local_time_equal(mgp_local_time *first, mgp_local_time *second) {
return MgInvoke<int>(mgp_local_time_equal, first, second);
}
inline int local_time_get_hour(mgp_local_time *local_time) {
return MgInvoke<int>(mgp_local_time_get_hour, local_time);
}
inline int local_time_get_minute(mgp_local_time *local_time) {
return MgInvoke<int>(mgp_local_time_get_minute, local_time);
}
inline int local_time_get_second(mgp_local_time *local_time) {
return MgInvoke<int>(mgp_local_time_get_second, local_time);
}
inline int local_time_get_millisecond(mgp_local_time *local_time) {
return MgInvoke<int>(mgp_local_time_get_millisecond, local_time);
}
inline int local_time_get_microsecond(mgp_local_time *local_time) {
return MgInvoke<int>(mgp_local_time_get_microsecond, local_time);
}
inline int64_t local_time_timestamp(mgp_local_time *local_time) {
return MgInvoke<int64_t>(mgp_local_time_timestamp, local_time);
}
inline mgp_local_time *local_time_now(mgp_memory *memory) {
return MgInvoke<mgp_local_time *>(mgp_local_time_now, memory);
}
inline mgp_local_time *local_time_add_duration(mgp_local_time *local_time, mgp_duration *dur, mgp_memory *memory) {
return MgInvoke<mgp_local_time *>(mgp_local_time_add_duration, local_time, dur, memory);
}
inline mgp_local_time *local_time_sub_duration(mgp_local_time *local_time, mgp_duration *dur, mgp_memory *memory) {
return MgInvoke<mgp_local_time *>(mgp_local_time_sub_duration, local_time, dur, memory);
}
inline mgp_duration *local_time_diff(mgp_local_time *first, mgp_local_time *second, mgp_memory *memory) {
return MgInvoke<mgp_duration *>(mgp_local_time_diff, first, second, memory);
}
// mgp_local_date_time
inline mgp_local_date_time *local_date_time_from_string(const char *string, mgp_memory *memory) {
return MgInvoke<mgp_local_date_time *>(mgp_local_date_time_from_string, string, memory);
}
inline mgp_local_date_time *local_date_time_from_parameters(mgp_local_date_time_parameters *parameters,
mgp_memory *memory) {
return MgInvoke<mgp_local_date_time *>(mgp_local_date_time_from_parameters, parameters, memory);
}
inline mgp_local_date_time *local_date_time_copy(mgp_local_date_time *local_date_time, mgp_memory *memory) {
return MgInvoke<mgp_local_date_time *>(mgp_local_date_time_copy, local_date_time, memory);
}
inline void local_date_time_destroy(mgp_local_date_time *local_date_time) {
mgp_local_date_time_destroy(local_date_time);
}
inline bool local_date_time_equal(mgp_local_date_time *first, mgp_local_date_time *second) {
return MgInvoke<int>(mgp_local_date_time_equal, first, second);
}
inline int local_date_time_get_year(mgp_local_date_time *local_date_time) {
return MgInvoke<int>(mgp_local_date_time_get_year, local_date_time);
}
inline int local_date_time_get_month(mgp_local_date_time *local_date_time) {
return MgInvoke<int>(mgp_local_date_time_get_month, local_date_time);
}
inline int local_date_time_get_day(mgp_local_date_time *local_date_time) {
return MgInvoke<int>(mgp_local_date_time_get_day, local_date_time);
}
inline int local_date_time_get_hour(mgp_local_date_time *local_date_time) {
return MgInvoke<int>(mgp_local_date_time_get_hour, local_date_time);
}
inline int local_date_time_get_minute(mgp_local_date_time *local_date_time) {
return MgInvoke<int>(mgp_local_date_time_get_minute, local_date_time);
}
inline int local_date_time_get_second(mgp_local_date_time *local_date_time) {
return MgInvoke<int>(mgp_local_date_time_get_second, local_date_time);
}
inline int local_date_time_get_millisecond(mgp_local_date_time *local_date_time) {
return MgInvoke<int>(mgp_local_date_time_get_millisecond, local_date_time);
}
inline int local_date_time_get_microsecond(mgp_local_date_time *local_date_time) {
return MgInvoke<int>(mgp_local_date_time_get_microsecond, local_date_time);
}
inline int64_t local_date_time_timestamp(mgp_local_date_time *local_date_time) {
return MgInvoke<int64_t>(mgp_local_date_time_timestamp, local_date_time);
}
inline mgp_local_date_time *local_date_time_now(mgp_memory *memory) {
return MgInvoke<mgp_local_date_time *>(mgp_local_date_time_now, memory);
}
inline mgp_local_date_time *local_date_time_add_duration(mgp_local_date_time *local_date_time, mgp_duration *dur,
mgp_memory *memory) {
return MgInvoke<mgp_local_date_time *>(mgp_local_date_time_add_duration, local_date_time, dur, memory);
}
inline mgp_local_date_time *local_date_time_sub_duration(mgp_local_date_time *local_date_time, mgp_duration *dur,
mgp_memory *memory) {
return MgInvoke<mgp_local_date_time *>(mgp_local_date_time_sub_duration, local_date_time, dur, memory);
}
inline mgp_duration *local_date_time_diff(mgp_local_date_time *first, mgp_local_date_time *second, mgp_memory *memory) {
return MgInvoke<mgp_duration *>(mgp_local_date_time_diff, first, second, memory);
}
// mgp_duration
inline mgp_duration *duration_from_string(const char *string, mgp_memory *memory) {
return MgInvoke<mgp_duration *>(mgp_duration_from_string, string, memory);
}
inline mgp_duration *duration_from_parameters(mgp_duration_parameters *parameters, mgp_memory *memory) {
return MgInvoke<mgp_duration *>(mgp_duration_from_parameters, parameters, memory);
}
inline mgp_duration *duration_from_microseconds(int64_t microseconds, mgp_memory *memory) {
return MgInvoke<mgp_duration *>(mgp_duration_from_microseconds, microseconds, memory);
}
inline mgp_duration *duration_copy(mgp_duration *duration, mgp_memory *memory) {
return MgInvoke<mgp_duration *>(mgp_duration_copy, duration, memory);
}
inline void duration_destroy(mgp_duration *duration) { mgp_duration_destroy(duration); }
inline int64_t duration_get_microseconds(mgp_duration *duration) {
return MgInvoke<int64_t>(mgp_duration_get_microseconds, duration);
}
inline bool duration_equal(mgp_duration *first, mgp_duration *second) {
return MgInvoke<int>(mgp_duration_equal, first, second);
}
inline mgp_duration *duration_neg(mgp_duration *duration, mgp_memory *memory) {
return MgInvoke<mgp_duration *>(mgp_duration_neg, duration, memory);
}
inline mgp_duration *duration_add(mgp_duration *first, mgp_duration *second, mgp_memory *memory) {
return MgInvoke<mgp_duration *>(mgp_duration_add, first, second, memory);
}
inline mgp_duration *duration_sub(mgp_duration *first, mgp_duration *second, mgp_memory *memory) {
return MgInvoke<mgp_duration *>(mgp_duration_sub, first, second, memory);
}
// Procedure
inline mgp_proc *module_add_read_procedure(mgp_module *module, const char *name, mgp_proc_cb cb) {
return MgInvoke<mgp_proc *>(mgp_module_add_read_procedure, module, name, cb);
}
inline mgp_proc *module_add_write_procedure(mgp_module *module, const char *name, mgp_proc_cb cb) {
return MgInvoke<mgp_proc *>(mgp_module_add_write_procedure, module, name, cb);
}
inline mgp_proc *module_add_batch_read_procedure(mgp_module *module, const char *name, mgp_proc_cb cb,
mgp_proc_initializer initializer, mgp_proc_cleanup cleanup) {
return MgInvoke<mgp_proc *>(mgp_module_add_batch_read_procedure, module, name, cb, initializer, cleanup);
}
inline mgp_proc *module_add_batch_write_procedure(mgp_module *module, const char *name, mgp_proc_cb cb,
mgp_proc_initializer initializer, mgp_proc_cleanup cleanup) {
return MgInvoke<mgp_proc *>(mgp_module_add_batch_write_procedure, module, name, cb, initializer, cleanup);
}
inline void proc_add_arg(mgp_proc *proc, const char *name, mgp_type *type) {
MgInvokeVoid(mgp_proc_add_arg, proc, name, type);
}
inline void proc_add_opt_arg(mgp_proc *proc, const char *name, mgp_type *type, mgp_value *default_value) {
MgInvokeVoid(mgp_proc_add_opt_arg, proc, name, type, default_value);
}
inline void proc_add_result(mgp_proc *proc, const char *name, mgp_type *type) {
MgInvokeVoid(mgp_proc_add_result, proc, name, type);
}
inline void proc_add_deprecated_result(mgp_proc *proc, const char *name, mgp_type *type) {
MgInvokeVoid(mgp_proc_add_deprecated_result, proc, name, type);
}
inline int must_abort(mgp_graph *graph) { return mgp_must_abort(graph); }
// mgp_result
inline void result_set_error_msg(mgp_result *res, const char *error_msg) {
MgInvokeVoid(mgp_result_set_error_msg, res, error_msg);
}
inline mgp_result_record *result_new_record(mgp_result *res) {
return MgInvoke<mgp_result_record *>(mgp_result_new_record, res);
}
inline void result_record_insert(mgp_result_record *record, const char *field_name, mgp_value *val) {
MgInvokeVoid(mgp_result_record_insert, record, field_name, val);
}
// Function
inline mgp_func *module_add_function(mgp_module *module, const char *name, mgp_func_cb cb) {
return MgInvoke<mgp_func *>(mgp_module_add_function, module, name, cb);
}
inline void func_add_arg(mgp_func *func, const char *name, mgp_type *type) {
MgInvokeVoid(mgp_func_add_arg, func, name, type);
}
inline void func_add_opt_arg(mgp_func *func, const char *name, mgp_type *type, mgp_value *default_value) {
MgInvokeVoid(mgp_func_add_opt_arg, func, name, type, default_value);
}
inline void func_result_set_error_msg(mgp_func_result *res, const char *msg, mgp_memory *memory) {
MgInvokeVoid(mgp_func_result_set_error_msg, res, msg, memory);
}
inline void func_result_set_value(mgp_func_result *res, mgp_value *value, mgp_memory *memory) {
MgInvokeVoid(mgp_func_result_set_value, res, value, memory);
}
} // namespace mgp
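
For orientation, here is a minimal editorial sketch of how the wrappers above might be combined to register a read procedure. It assumes the standard `mgp_init_module` entry point, the `mgp_value_make_int` and `mgp_type_int` C API calls, and that the `MgInvoke` helpers defined earlier in this header surface failures as exceptions; the procedure and field names are hypothetical.

// Editor's sketch, not part of the header: registering a read procedure
// that yields a single integer result field named "value".
extern "C" int mgp_init_module(struct mgp_module *module, struct mgp_memory * /*memory*/) {
  try {
    mgp_proc *proc = mgp::module_add_read_procedure(
        module, "answer",
        [](mgp_list * /*args*/, mgp_graph * /*graph*/, mgp_result *result, mgp_memory *memory) {
          mgp_result_record *record = mgp::result_new_record(result);
          mgp_value *value = mgp::MgInvoke<mgp_value *>(mgp_value_make_int, int64_t{42}, memory);
          mgp::result_record_insert(record, "value", value);
          mgp_value_destroy(value);  // the record stores a copy
        });
    mgp::proc_add_result(proc, "value", mgp::MgInvoke<mgp_type *>(mgp_type_int));
  } catch (...) {
    return 1;  // a non-zero return tells Memgraph the module failed to load
  }
  return 0;
}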

350
include/_mgp_mock.py Normal file
View File

@ -0,0 +1,350 @@
import typing
from enum import Enum
import networkx as nx
NX_LABEL_ATTR = "labels"
NX_TYPE_ATTR = "type"
SOURCE_TYPE_KAFKA = "SOURCE_TYPE_KAFKA"
SOURCE_TYPE_PULSAR = "SOURCE_TYPE_PULSAR"
"""
This module provides helpers for the mock Python API, much like _mgp.py does for mgp.py.
"""
class InvalidArgumentError(Exception):
"""
Signals that some of the arguments have invalid values.
"""
pass
class ImmutableObjectError(Exception):
pass
class LogicErrorError(Exception):
pass
class DeletedObjectError(Exception):
pass
class EdgeConstants(Enum):
I_START = 0
I_END = 1
I_KEY = 2
class Graph:
"""Wrapper around a NetworkX MultiDiGraph instance."""
__slots__ = ("nx", "_highest_vertex_id", "_highest_edge_id", "_valid")
def __init__(self, graph: nx.MultiDiGraph) -> None:
if not isinstance(graph, nx.MultiDiGraph):
raise TypeError(f"Expected 'networkx.classes.multidigraph.MultiDiGraph', got '{type(graph)}'")
self.nx = graph
self._highest_vertex_id = None
self._highest_edge_id = None
self._valid = True
@property
def vertex_ids(self):
return self.nx.nodes
def vertex_is_isolate(self, vertex_id: int) -> bool:
return nx.is_isolate(self.nx, vertex_id)
@property
def vertices(self):
return (Vertex(node_id, self) for node_id in self.nx.nodes)
def has_node(self, node_id):
return self.nx.has_node(node_id)
@property
def edges(self):
return self.nx.edges
def is_valid(self) -> bool:
return self._valid
def get_vertex_by_id(self, vertex_id: int) -> "Vertex":
return Vertex(vertex_id, self)
def invalidate(self):
self._valid = False
def is_immutable(self) -> bool:
return nx.is_frozen(self.nx)
def make_immutable(self):
self.nx = nx.freeze(self.nx)
def _new_vertex_id(self):
if self._highest_vertex_id is None:
self._highest_vertex_id = max(vertex_id for vertex_id in self.nx.nodes)
return self._highest_vertex_id + 1
def _new_edge_id(self):
if self._highest_edge_id is None:
self._highest_edge_id = max(edge[EdgeConstants.I_KEY.value] for edge in self.nx.edges(keys=True))
return self._highest_edge_id + 1
def create_vertex(self) -> "Vertex":
vertex_id = self._new_vertex_id()
self.nx.add_node(vertex_id)
self._highest_vertex_id = vertex_id
return Vertex(vertex_id, self)
def create_edge(self, from_vertex: "Vertex", to_vertex: "Vertex", edge_type: str) -> "Edge":
if from_vertex.is_deleted() or to_vertex.is_deleted():
raise DeletedObjectError("Accessing deleted object.")
edge_id = self._new_edge_id()
from_id = from_vertex.id
to_id = to_vertex.id
self.nx.add_edge(from_id, to_id, key=edge_id, type=edge_type)
self._highest_edge_id = edge_id
return Edge((from_id, to_id, edge_id), self)
def delete_vertex(self, vertex_id: int):
self.nx.remove_node(vertex_id)
def delete_edge(self, from_vertex_id: int, to_vertex_id: int, edge_id: int):
self.nx.remove_edge(from_vertex_id, to_vertex_id, edge_id)
@property
def highest_vertex_id(self) -> int:
if self._highest_vertex_id is None:
self._highest_vertex_id = max(vertex_id for vertex_id in self.nx.nodes) + 1
return self._highest_vertex_id
@property
def highest_edge_id(self) -> int:
if self._highest_edge_id is None:
self._highest_edge_id = max(edge[EdgeConstants.I_KEY.value] for edge in self.nx.edges(keys=True))
return self._highest_edge_id + 1
class Vertex:
"""Represents a graph vertex."""
__slots__ = ("_id", "_graph")
def __init__(self, id: int, graph: Graph) -> None:
if not isinstance(id, int):
raise TypeError(f"Expected 'int', got '{type(id)}'")
if not isinstance(graph, Graph):
raise TypeError(f"Expected '_mgp_mock.Graph', got '{type(graph)}'")
if not graph.nx.has_node(id):
raise IndexError(f"Unable to find vertex with ID {id}.")
self._id = id
self._graph = graph
def is_valid(self) -> bool:
return self._graph.is_valid()
def is_deleted(self) -> bool:
return not self._graph.nx.has_node(self._id) and self._id <= self._graph.highest_vertex_id
@property
def underlying_graph(self) -> Graph:
return self._graph
def underlying_graph_is_mutable(self) -> bool:
return not nx.is_frozen(self._graph.nx)
@property
def labels(self) -> typing.List[str]:
return self._graph.nx.nodes[self._id][NX_LABEL_ATTR].split(":")
def add_label(self, label: str) -> None:
if nx.is_frozen(self._graph.nx):
raise ImmutableObjectError("Cannot modify immutable object.")
self._graph.nx.nodes[self._id][NX_LABEL_ATTR] += f":{label}"
def remove_label(self, label: str) -> None:
if nx.is_frozen(self._graph.nx):
raise ImmutableObjectError("Cannot modify immutable object.")
labels = self._graph.nx.nodes[self._id][NX_LABEL_ATTR]
if labels.startswith(f"{label}:"):
labels = "\n" + labels # pseudo-string starter
self._graph.nx.nodes[self._id][NX_LABEL_ATTR] = labels.replace(f"\n{label}:", "")
elif labels.endswith(f":{label}"):
labels += "\n" # pseudo-string terminator
self._graph.nx.nodes[self._id][NX_LABEL_ATTR] = labels.replace(f":{label}\n", "")
else:
self._graph.nx.nodes[self._id][NX_LABEL_ATTR] = labels.replace(f":{label}:", ":")
@property
def id(self) -> int:
return self._id
@property
def properties(self):
return (
(key, value)
for key, value in self._graph.nx.nodes[self._id].items()
if key not in (NX_LABEL_ATTR, NX_TYPE_ATTR)
)
def get_property(self, property_name: str):
return self._graph.nx.nodes[self._id][property_name]
def set_property(self, property_name: str, value: object):
self._graph.nx.nodes[self._id][property_name] = value
@property
def in_edges(self) -> typing.Iterable["Edge"]:
return [Edge(edge, self._graph) for edge in self._graph.nx.in_edges(self._id, keys=True)]
@property
def out_edges(self) -> typing.Iterable["Edge"]:
return [Edge(edge, self._graph) for edge in self._graph.nx.out_edges(self._id, keys=True)]
class Edge:
"""Represents a graph edge."""
__slots__ = ("_edge", "_graph")
def __init__(self, edge: typing.Tuple[int, int, int], graph: Graph) -> None:
if not isinstance(edge, typing.Tuple):
raise TypeError(f"Expected 'Tuple', got '{type(edge)}'")
if not isinstance(graph, Graph):
raise TypeError(f"Expected '_mgp_mock.Graph', got '{type(graph)}'")
if not graph.nx.has_edge(*edge):
raise IndexError(f"Unable to find edge with ID {edge[EdgeConstants.I_KEY.value]}.")
self._edge = edge
self._graph = graph
def is_valid(self) -> bool:
return self._graph.is_valid()
def is_deleted(self) -> bool:
return (
not self._graph.nx.has_edge(*self._edge)
and self._edge[EdgeConstants.I_KEY.value] <= self._graph.highest_edge_id
)
def underlying_graph_is_mutable(self) -> bool:
return not nx.is_frozen(self._graph.nx)
@property
def id(self) -> int:
return self._edge[EdgeConstants.I_KEY.value]
@property
def edge(self) -> typing.Tuple[int, int, int]:
return self._edge
@property
def start_id(self) -> int:
return self._edge[EdgeConstants.I_START.value]
@property
def end_id(self) -> int:
return self._edge[EdgeConstants.I_END.value]
def get_type_name(self):
return self._graph.nx.get_edge_data(*self._edge)[NX_TYPE_ATTR]
def from_vertex(self) -> Vertex:
return Vertex(self.start_id, self._graph)
def to_vertex(self) -> Vertex:
return Vertex(self.end_id, self._graph)
@property
def properties(self):
return (
(key, value)
for key, value in self._graph.nx.edges[self._edge].items()
if key not in (NX_LABEL_ATTR, NX_TYPE_ATTR)
)
def get_property(self, property_name: str):
return self._graph.nx.edges[self._edge][property_name]
def set_property(self, property_name: str, value: object):
self._graph.nx.edges[self._edge][property_name] = value
class Path:
"""Represents a path comprised of `Vertex` and `Edge` instances."""
__slots__ = ("_vertices", "_edges", "_graph")
__create_key = object()
def __init__(self, create_key, vertex_id: int, graph: Graph) -> None:
assert create_key == Path.__create_key, "Path objects must be created using Path.make_with_start"
self._vertices = [vertex_id]
self._edges = []
self._graph = graph
@classmethod
def make_with_start(cls, vertex: Vertex) -> "Path":
if not isinstance(vertex, Vertex):
raise TypeError(f"Expected 'Vertex', got '{type(vertex)}'")
if not isinstance(vertex.underlying_graph, Graph):
raise TypeError(f"Expected '_mgp_mock.Graph', got '{type(vertex.underlying_graph)}'")
if not vertex.underlying_graph.nx.has_node(vertex._id):
raise IndexError(f"Unable to find vertex with ID {vertex._id}.")
return Path(cls.__create_key, vertex._id, vertex.underlying_graph)
def is_valid(self) -> bool:
return self._graph.is_valid()
def underlying_graph_is_mutable(self) -> bool:
return not nx.is_frozen(self._graph.nx)
def expand(self, edge: Edge):
if edge.start_id != self._vertices[-1]:
raise LogicErrorError("Logic error.")
self._vertices.append(edge.end_id)
self._edges.append((edge.start_id, edge.end_id, edge.id))
def pop(self):
if not self._edges:
raise IndexError("Path contains no relationships.")
self._vertices.pop()
self._edges.pop()
def vertex_at(self, index: int) -> Vertex:
return Vertex(self._vertices[index], self._graph)
def edge_at(self, index: int) -> Edge:
return Edge(self._edges[index], self._graph)
def size(self) -> int:
return len(self._edges)

108
include/mg_exceptions.hpp Normal file
View File

@ -0,0 +1,108 @@
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#pragma once
#include <exception>
#include <iostream>
#include <sstream>
#include <string>
namespace mg_exception {
// Instead of writing this utility function, we could have used `fmt::format`, but that's not an ideal option here
// because it would introduce a dependency that would propagate to the client code (if the exceptions here were
// used). Since the functionality here is not complex and the code is not on a critical path, we opted for a pure C++
// solution.
template <typename FirstArg, typename... Args>
std::string StringSerialize(FirstArg &&firstArg, Args &&...args) {
std::stringstream stream;
stream << std::forward<FirstArg>(firstArg);
((stream << " " << args), ...);
return stream.str();
}
struct UnknownException : public std::exception {
const char *what() const noexcept override { return "Unknown exception!"; }
};
struct NotEnoughMemoryException : public std::exception {
NotEnoughMemoryException()
: message_{
StringSerialize("Not enough memory! For more details please visit", "https://memgr.ph/memory-control")} {}
const char *what() const noexcept override { return message_.c_str(); }
private:
std::string message_;
};
struct AllocationException : public std::exception {
AllocationException()
: message_{StringSerialize("Could not allocate memory. For more details please visit",
"https://memgr.ph/memory-control")} {}
const char *what() const noexcept override { return message_.c_str(); }
private:
std::string message_;
};
struct InsufficientBufferException : public std::exception {
const char *what() const noexcept override { return "Buffer is not sufficient to process procedure!"; }
};
struct OutOfRangeException : public std::exception {
const char *what() const noexcept override { return "Index out of range!"; }
};
struct LogicException : public std::exception {
const char *what() const noexcept override { return "Logic exception, check the procedure signature!"; }
};
struct DeletedObjectException : public std::exception {
const char *what() const noexcept override { return "Object is deleted!"; }
};
struct InvalidArgumentException : public std::exception {
const char *what() const noexcept override { return "Invalid argument!"; }
};
struct InvalidIDException : public std::exception {
InvalidIDException() : message_{"Invalid ID!"} {}
explicit InvalidIDException(std::uint64_t identifier) : message_{StringSerialize("Invalid ID =", identifier)} {}
const char *what() const noexcept override { return message_.c_str(); }
private:
std::string message_;
};
struct KeyAlreadyExistsException : public std::exception {
KeyAlreadyExistsException() : message_{"Key you are trying to set already exists!"} {}
explicit KeyAlreadyExistsException(const std::string &key)
: message_{StringSerialize("Key you are trying to set already exists! KEY = ", key)} {}
const char *what() const noexcept override { return message_.c_str(); }
private:
std::string message_;
};
struct ImmutableObjectException : public std::exception {
const char *what() const noexcept override { return "Object you are trying to change is immutable!"; }
};
struct ValueConversionException : public std::exception {
const char *what() const noexcept override { return "Error in value conversion!"; }
};
struct SerializationException : public std::exception {
const char *what() const noexcept override { return "Error in serialization!"; }
};
} // namespace mg_exception
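
A brief usage sketch (editorial, assuming this header is included): `StringSerialize` joins its arguments with single spaces, so an `InvalidIDException` constructed from an ID can be caught and re-serialized as below; the function name is hypothetical.

#include <cstdint>
#include <string>

std::string DescribeInvalidId(std::uint64_t id) {
  try {
    throw mg_exception::InvalidIDException(id);  // what() == "Invalid ID = 42" for id == 42
  } catch (const std::exception &e) {
    return mg_exception::StringSerialize("Caught:", e.what());
  }
}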

View File

@ -1,4 +1,4 @@
// Copyright 2022 Memgraph Ltd.
// Copyright 2024 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
@ -37,12 +37,19 @@ extern "C" {
/// All functions return an error code that can be used to figure out whether the API call was successful or not. In
/// case of failure, the specific error code can be used to identify the reason of the failure.
MGP_ENUM_CLASS MGP_NODISCARD mgp_error{
MGP_ERROR_NO_ERROR, MGP_ERROR_UNKNOWN_ERROR,
MGP_ERROR_UNABLE_TO_ALLOCATE, MGP_ERROR_INSUFFICIENT_BUFFER,
MGP_ERROR_OUT_OF_RANGE, MGP_ERROR_LOGIC_ERROR,
MGP_ERROR_DELETED_OBJECT, MGP_ERROR_INVALID_ARGUMENT,
MGP_ERROR_KEY_ALREADY_EXISTS, MGP_ERROR_IMMUTABLE_OBJECT,
MGP_ERROR_VALUE_CONVERSION, MGP_ERROR_SERIALIZATION_ERROR,
MGP_ERROR_NO_ERROR,
MGP_ERROR_UNKNOWN_ERROR,
MGP_ERROR_UNABLE_TO_ALLOCATE,
MGP_ERROR_INSUFFICIENT_BUFFER,
MGP_ERROR_OUT_OF_RANGE,
MGP_ERROR_LOGIC_ERROR,
MGP_ERROR_DELETED_OBJECT,
MGP_ERROR_INVALID_ARGUMENT,
MGP_ERROR_KEY_ALREADY_EXISTS,
MGP_ERROR_IMMUTABLE_OBJECT,
MGP_ERROR_VALUE_CONVERSION,
MGP_ERROR_SERIALIZATION_ERROR,
MGP_ERROR_AUTHORIZATION_ERROR,
};
///@}
@ -104,6 +111,22 @@ enum mgp_error mgp_global_aligned_alloc(size_t size_in_bytes, size_t alignment,
/// The behavior is undefined if `ptr` is not a value returned from a prior
/// mgp_global_alloc() or mgp_global_aligned_alloc().
void mgp_global_free(void *p);
/// State of the graph database.
struct mgp_graph;
/// Allocations are tracked only for the main thread. If new threads are spawned
/// inside a procedure, calling the following function starts tracking
/// allocations for the current thread too. This is important if you need the
/// query memory limit, or a per-procedure memory limit, to apply to the given
/// procedure.
enum mgp_error mgp_track_current_thread_allocations(struct mgp_graph *graph);
/// Once allocations are tracked for the current thread, you need to stop tracking
/// them before the thread finishes executing or is detached. Otherwise, the
/// system may slow down due to unnecessary allocation tracking.
enum mgp_error mgp_untrack_current_thread_allocations(struct mgp_graph *graph);
///@}
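// Editor's sketch, not part of this header: a worker thread spawned inside a
// procedure opts into allocation tracking so per-procedure and query memory
// limits also cover its allocations. Assumes C++ and <thread>; the returned
// error codes are discarded for brevity.
static inline void example_tracked_worker(struct mgp_graph *graph) {
  std::thread worker([graph] {
    (void)mgp_track_current_thread_allocations(graph);    // start tracking on this thread
    /* ... allocations here count toward the procedure's memory limit ... */
    (void)mgp_untrack_current_thread_allocations(graph);  // stop before the thread finishes
  });
  worker.join();
}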
/// @name Operations on mgp_value
@ -164,6 +187,8 @@ enum mgp_value_type {
MGP_VALUE_TYPE_DURATION,
};
enum mgp_error mgp_value_copy(struct mgp_value *val, struct mgp_memory *memory, struct mgp_value **result);
/// Free the memory used by the given mgp_value instance.
void mgp_value_destroy(struct mgp_value *val);
@ -399,9 +424,14 @@ enum mgp_error mgp_value_get_duration(struct mgp_value *val, struct mgp_duration
/// mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE is returned if unable to allocate a mgp_list.
enum mgp_error mgp_list_make_empty(size_t capacity, struct mgp_memory *memory, struct mgp_list **result);
enum mgp_error mgp_list_copy(struct mgp_list *list, struct mgp_memory *memory, struct mgp_list **result);
/// Free the memory used by the given mgp_list and contained elements.
void mgp_list_destroy(struct mgp_list *list);
/// Return whether the given mgp_list contains any deleted values.
enum mgp_error mgp_list_contains_deleted(struct mgp_list *list, int *result);
/// Append a copy of mgp_value to mgp_list if capacity allows.
/// The list copies the given value and therefore does not take ownership of the
/// original value. You still need to call mgp_value_destroy to free the
@ -437,9 +467,14 @@ enum mgp_error mgp_list_at(struct mgp_list *list, size_t index, struct mgp_value
/// mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE is returned if unable to allocate a mgp_map.
enum mgp_error mgp_map_make_empty(struct mgp_memory *memory, struct mgp_map **result);
enum mgp_error mgp_map_copy(struct mgp_map *map, struct mgp_memory *memory, struct mgp_map **result);
/// Free the memory used by the given mgp_map and contained items.
void mgp_map_destroy(struct mgp_map *map);
/// Return whether the given mgp_map contains any deleted values.
enum mgp_error mgp_map_contains_deleted(struct mgp_map *map, int *result);
/// Insert a new mapping from a NULL terminated character string to a value.
/// If a mapping with the same key already exists, it is *not* replaced.
/// In case of insertion, both the string and the value are copied into the map.
@ -449,6 +484,18 @@ void mgp_map_destroy(struct mgp_map *map);
/// Return mgp_error::MGP_ERROR_KEY_ALREADY_EXISTS if a previous mapping already exists.
enum mgp_error mgp_map_insert(struct mgp_map *map, const char *key, struct mgp_value *value);
/// Insert a mapping from a NULL terminated character string to a value.
/// If a mapping with the same key already exists, it is replaced.
/// In case of update, both the string and the value are copied into the map.
/// Therefore, the map does not take ownership of the original key nor value, so
/// you still need to free their memory explicitly.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if unable to allocate memory for the update.
enum mgp_error mgp_map_update(struct mgp_map *map, const char *key, struct mgp_value *value);
/// Erase a mapping by key.
/// If the key doesn't exist in the map, nothing happens.
enum mgp_error mgp_map_erase(struct mgp_map *map, const char *key);
/// Get the number of items stored in mgp_map.
/// Current implementation always returns without errors.
enum mgp_error mgp_map_size(struct mgp_map *map, size_t *result);
@ -457,6 +504,9 @@ enum mgp_error mgp_map_size(struct mgp_map *map, size_t *result);
/// Result is NULL if no mapping exists.
enum mgp_error mgp_map_at(struct mgp_map *map, const char *key, struct mgp_value **result);
/// Result is non-zero if the key exists in the map.
enum mgp_error mgp_key_exists(struct mgp_map *map, const char *key, int *result);
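// Editor's sketch, not part of this header: a typical map round-trip with the
// calls above (C++; error codes are discarded for brevity).
static inline bool example_map_roundtrip(struct mgp_memory *memory, struct mgp_value *value) {
  struct mgp_map *map = nullptr;
  if (mgp_map_make_empty(memory, &map) != mgp_error::MGP_ERROR_NO_ERROR) return false;
  (void)mgp_map_insert(map, "answer", value);  // copies `value`; the caller still owns it
  int exists = 0;
  (void)mgp_key_exists(map, "answer", &exists);
  mgp_map_destroy(map);  // frees the map and its copied items
  return exists != 0;
}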
/// An item in the mgp_map.
struct mgp_map_item;
@ -508,6 +558,9 @@ enum mgp_error mgp_path_copy(struct mgp_path *path, struct mgp_memory *memory, s
/// Free the memory used by the given mgp_path and contained vertices and edges.
void mgp_path_destroy(struct mgp_path *path);
/// Return whether the given mgp_path contains any deleted values.
enum mgp_error mgp_path_contains_deleted(struct mgp_path *path, int *result);
/// Append an edge continuing from the last vertex on the path.
/// The edge is copied into the path. Therefore, the path does not take
/// ownership of the original edge, so you still need to free the edge memory
@ -518,6 +571,10 @@ void mgp_path_destroy(struct mgp_path *path);
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if unable to allocate memory for path extension.
enum mgp_error mgp_path_expand(struct mgp_path *path, struct mgp_edge *edge);
/// Remove the last node and the last relationship from the path.
/// Return mgp_error::MGP_ERROR_OUT_OF_RANGE if the path contains no relationships.
enum mgp_error mgp_path_pop(struct mgp_path *path);
/// Get the number of edges in a mgp_path.
/// Current implementation always returns without errors.
enum mgp_error mgp_path_size(struct mgp_path *path, size_t *result);
@ -622,6 +679,12 @@ struct mgp_vertex_id {
/// Get the ID of given vertex.
enum mgp_error mgp_vertex_get_id(struct mgp_vertex *v, struct mgp_vertex_id *result);
/// Get the in degree of given vertex.
enum mgp_error mgp_vertex_get_in_degree(struct mgp_vertex *v, size_t *result);
/// Get the out degree of given vertex.
enum mgp_error mgp_vertex_get_out_degree(struct mgp_vertex *v, size_t *result);
/// Result is non-zero if the vertex can be modified.
/// The mutability of the vertex is the same as the graph which it is part of. If a vertex is immutable, then edges
/// cannot be created or deleted, properties and labels cannot be set or removed and all of the returned edges will be
@ -639,6 +702,15 @@ enum mgp_error mgp_vertex_underlying_graph_is_mutable(struct mgp_vertex *v, int
enum mgp_error mgp_vertex_set_property(struct mgp_vertex *v, const char *property_name,
struct mgp_value *property_value);
/// Set the values of multiple properties on a vertex.
/// When a value is `null`, the corresponding property is removed from the vertex.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if unable to allocate memory for storing the properties.
/// Return mgp_error::MGP_ERROR_IMMUTABLE_OBJECT if `v` is immutable.
/// Return mgp_error::MGP_ERROR_DELETED_OBJECT if `v` has been deleted.
/// Return mgp_error::MGP_ERROR_SERIALIZATION_ERROR if `v` has been modified by another transaction.
/// Return mgp_error::MGP_ERROR_VALUE_CONVERSION if any value in `properties` is a vertex, edge or path.
enum mgp_error mgp_vertex_set_properties(struct mgp_vertex *v, struct mgp_map *properties);
/// Add the label to the vertex.
/// If the vertex already has the label, this function does nothing.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if unable to allocate memory for storing the label.
@ -662,6 +734,9 @@ enum mgp_error mgp_vertex_copy(struct mgp_vertex *v, struct mgp_memory *memory,
/// Free the memory used by a mgp_vertex.
void mgp_vertex_destroy(struct mgp_vertex *v);
/// Return whether the given mgp_vertex is deleted.
enum mgp_error mgp_vertex_is_deleted(struct mgp_vertex *v, int *result);
/// Result is non-zero if given vertices are equal, otherwise 0.
enum mgp_error mgp_vertex_equal(struct mgp_vertex *v1, struct mgp_vertex *v2, int *result);
@ -756,6 +831,9 @@ enum mgp_error mgp_edge_copy(struct mgp_edge *e, struct mgp_memory *memory, stru
/// Free the memory used by a mgp_edge.
void mgp_edge_destroy(struct mgp_edge *e);
/// Return whether the given mgp_edge is deleted.
enum mgp_error mgp_edge_is_deleted(struct mgp_edge *e, int *result);
/// Result is non-zero if given edges are equal, otherwise 0.
enum mgp_error mgp_edge_equal(struct mgp_edge *e1, struct mgp_edge *e2, int *result);
@ -789,6 +867,15 @@ enum mgp_error mgp_edge_get_property(struct mgp_edge *e, const char *property_na
/// Return mgp_error::MGP_ERROR_VALUE_CONVERSION if `property_value` is a vertex, edge or path.
enum mgp_error mgp_edge_set_property(struct mgp_edge *e, const char *property_name, struct mgp_value *property_value);
/// Set the values of multiple properties on an edge.
/// When a value is `null`, the corresponding property is removed from the edge.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if unable to allocate memory for storing the properties.
/// Return mgp_error::MGP_ERROR_IMMUTABLE_OBJECT if `e` is immutable.
/// Return mgp_error::MGP_ERROR_DELETED_OBJECT if `e` has been deleted.
/// Return mgp_error::MGP_ERROR_SERIALIZATION_ERROR if `e` has been modified by another transaction.
/// Return mgp_error::MGP_ERROR_VALUE_CONVERSION if any value in `properties` is a vertex, edge or path.
enum mgp_error mgp_edge_set_properties(struct mgp_edge *e, struct mgp_map *properties);
/// Start iterating over properties stored in the given edge.
/// The properties of the edge are copied when the iterator is created, therefore later changes won't affect them.
/// Resulting mgp_properties_iterator needs to be deallocated with
@ -798,21 +885,113 @@ enum mgp_error mgp_edge_set_property(struct mgp_edge *e, const char *property_na
enum mgp_error mgp_edge_iter_properties(struct mgp_edge *e, struct mgp_memory *memory,
struct mgp_properties_iterator **result);
/// State of the graph database.
struct mgp_graph;
/// Get the vertex corresponding to given ID, or NULL if no such vertex exists.
/// Resulting vertex must be freed using mgp_vertex_destroy.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if unable to allocate the vertex.
enum mgp_error mgp_graph_get_vertex_by_id(struct mgp_graph *g, struct mgp_vertex_id id, struct mgp_memory *memory,
struct mgp_vertex **result);
/// Result is non-zero if the index with the given name exists.
/// The current implementation always returns without errors.
enum mgp_error mgp_graph_has_text_index(struct mgp_graph *graph, const char *index_name, int *result);
/// Available modes of searching text indices.
MGP_ENUM_CLASS text_search_mode{
SPECIFIED_PROPERTIES,
REGEX,
ALL_PROPERTIES,
};
/// Search the named text index for the given query. The result is a map with the "search_results" and "error_msg" keys.
/// The "search_results" key contains the vertices whose text-indexed properties match the given query.
/// In case of a Tantivy error, the "search_results" key is absent, and "error_msg" contains the error message.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if there's an allocation error while constructing the results map.
/// Return mgp_error::MGP_ERROR_KEY_ALREADY_EXISTS if the same key is being created in the results map more than once.
enum mgp_error mgp_graph_search_text_index(struct mgp_graph *graph, const char *index_name, const char *search_query,
enum text_search_mode search_mode, struct mgp_memory *memory,
struct mgp_map **result);
/// Aggregate over the results of a search over the named text index. The result is a map with the "aggregation_results"
/// and "error_msg" keys.
/// The "aggregation_results" key contains the vertices whose text-indexed properties match the given query.
/// In case of a Tantivy error, the "aggregation_results" key is absent, and "error_msg" contains the error message.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if theres an allocation error while constructing the results map.
/// Return mgp_error::MGP_ERROR_KEY_ALREADY_EXISTS if the same key is being created in the results map more than once.
enum mgp_error mgp_graph_aggregate_over_text_index(struct mgp_graph *graph, const char *index_name,
const char *search_query, const char *aggregation_query,
struct mgp_memory *memory, struct mgp_map **result);
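// Editor's sketch, not part of this header: running a named-property text
// search and unpacking the documented result map. The index name and query are
// hypothetical; error handling is reduced to early returns.
static inline void example_text_search(struct mgp_graph *graph, struct mgp_memory *memory) {
  struct mgp_map *result = nullptr;
  if (mgp_graph_search_text_index(graph, "complaints_index", "data.title:rain",
                                  text_search_mode::SPECIFIED_PROPERTIES, memory,
                                  &result) != mgp_error::MGP_ERROR_NO_ERROR) {
    return;
  }
  struct mgp_value *hits = nullptr;
  (void)mgp_map_at(result, "search_results", &hits);  // nullptr if Tantivy reported an error
  if (hits == nullptr) {
    struct mgp_value *error_msg = nullptr;
    (void)mgp_map_at(result, "error_msg", &error_msg);
    /* ... surface the error message to the caller ... */
  }
  mgp_map_destroy(result);
}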
/// Creates a label index for the given label.
/// mgp_error::MGP_ERROR_NO_ERROR is always returned.
/// If the label index already exists, the result will be 0, otherwise 1.
enum mgp_error mgp_create_label_index(struct mgp_graph *graph, const char *label, int *result);
/// Drop label index.
enum mgp_error mgp_drop_label_index(struct mgp_graph *graph, const char *label, int *result);
/// List all label indices.
enum mgp_error mgp_list_all_label_indices(struct mgp_graph *graph, struct mgp_memory *memory, struct mgp_list **result);
/// Creates a label-property index for the given label and property.
/// mgp_error::MGP_ERROR_NO_ERROR is always returned.
/// If the label-property index already exists, the result will be 0, otherwise 1.
enum mgp_error mgp_create_label_property_index(struct mgp_graph *graph, const char *label, const char *property,
int *result);
/// Drops the label-property index for the given label and property.
/// mgp_error::MGP_ERROR_NO_ERROR is always returned.
/// If dropping the label-property index failed, the result will be 0, otherwise 1.
enum mgp_error mgp_drop_label_property_index(struct mgp_graph *graph, const char *label, const char *property,
int *result);
/// List all label-property indices.
enum mgp_error mgp_list_all_label_property_indices(struct mgp_graph *graph, struct mgp_memory *memory,
struct mgp_list **result);
/// Creates an existence constraint for the given label and property.
/// mgp_error::MGP_ERROR_NO_ERROR is always returned.
/// If creating the existence constraint failed, the result will be 0, otherwise 1.
enum mgp_error mgp_create_existence_constraint(struct mgp_graph *graph, const char *label, const char *property,
int *result);
/// Drops the existence constraint for the given label and property.
/// mgp_error::MGP_ERROR_NO_ERROR is always returned.
/// If dropping the existence constraint failed, the result will be 0, otherwise 1.
enum mgp_error mgp_drop_existence_constraint(struct mgp_graph *graph, const char *label, const char *property,
int *result);
/// List all existence constraints.
enum mgp_error mgp_list_all_existence_constraints(struct mgp_graph *graph, struct mgp_memory *memory,
struct mgp_list **result);
/// Creates a unique constraint for the given label and properties.
/// mgp_error::MGP_ERROR_NO_ERROR is always returned.
/// If creating the unique constraint failed, the result will be 0, otherwise 1.
enum mgp_error mgp_create_unique_constraint(struct mgp_graph *graph, const char *label, struct mgp_value *properties,
int *result);
/// Drops the unique constraint for the given label and properties.
/// mgp_error::MGP_ERROR_NO_ERROR is always returned.
/// If dropping the unique constraint failed, the result will be 0, otherwise 1.
enum mgp_error mgp_drop_unique_constraint(struct mgp_graph *graph, const char *label, struct mgp_value *properties,
int *result);
/// List all unique constraints.
enum mgp_error mgp_list_all_unique_constraints(struct mgp_graph *graph, struct mgp_memory *memory,
struct mgp_list **result);
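// Editor's sketch, not part of this header: ensuring a label index exists. The
// call itself always reports MGP_ERROR_NO_ERROR; the out-parameter says whether
// the index was newly created. The label name is hypothetical.
static inline bool example_ensure_label_index(struct mgp_graph *graph) {
  int newly_created = 0;
  if (mgp_create_label_index(graph, "Person", &newly_created) != mgp_error::MGP_ERROR_NO_ERROR) {
    return false;  // documented never to happen, but the result is MGP_NODISCARD
  }
  return newly_created == 1;  // 0 means the index already existed
}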
/// Result is non-zero if the graph can be modified.
/// If a graph is immutable, then vertices cannot be created or deleted, and all of the returned vertices will be
/// immutable also. The same applies for edges.
/// Current implementation always returns without errors.
enum mgp_error mgp_graph_is_mutable(struct mgp_graph *graph, int *result);
/// Result is non-zero if the graph is in transactional storage mode.
/// If a graph is not in transactional mode (i.e. analytical mode), then vertices and edges can be missing
/// because changes from other transactions are visible.
/// Current implementation always returns without errors.
enum mgp_error mgp_graph_is_transactional(struct mgp_graph *graph, int *result);
/// Add a new vertex to the graph.
/// Resulting vertex must be freed using mgp_vertex_destroy.
/// Return mgp_error::MGP_ERROR_IMMUTABLE_OBJECT if `graph` is immutable.
@ -839,6 +1018,29 @@ enum mgp_error mgp_graph_detach_delete_vertex(struct mgp_graph *graph, struct mg
enum mgp_error mgp_graph_create_edge(struct mgp_graph *graph, struct mgp_vertex *from, struct mgp_vertex *to,
struct mgp_edge_type type, struct mgp_memory *memory, struct mgp_edge **result);
/// Change the source ("from") vertex of an edge.
/// Return mgp_error::MGP_ERROR_IMMUTABLE_OBJECT if `graph` is immutable.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if unable to allocate a mgp_edge.
/// Return mgp_error::MGP_ERROR_DELETED_OBJECT if `from` or `to` has been deleted.
/// Return mgp_error::MGP_ERROR_SERIALIZATION_ERROR if `from` or `to` has been modified by another transaction.
enum mgp_error mgp_graph_edge_set_from(struct mgp_graph *graph, struct mgp_edge *e, struct mgp_vertex *new_from,
struct mgp_memory *memory, struct mgp_edge **result);
/// Change the destination ("to") vertex of an edge.
/// Return mgp_error::MGP_ERROR_IMMUTABLE_OBJECT if `graph` is immutable.
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if unable to allocate a mgp_edge.
/// Return mgp_error::MGP_ERROR_DELETED_OBJECT if `from` or `to` has been deleted.
/// Return mgp_error::MGP_ERROR_SERIALIZATION_ERROR if `from` or `to` has been modified by another transaction.
enum mgp_error mgp_graph_edge_set_to(struct mgp_graph *graph, struct mgp_edge *e, struct mgp_vertex *new_to,
struct mgp_memory *memory, struct mgp_edge **result);
/// Change the type of an edge.
/// Return mgp_error::MGP_ERROR_IMMUTABLE_OBJECT if `graph` is immutable.
/// Return mgp_error::MGP_ERROR_SERIALIZATION_ERROR if `edge`, its source or destination vertex has been modified by
/// another transaction.
enum mgp_error mgp_graph_edge_change_type(struct mgp_graph *graph, struct mgp_edge *e, struct mgp_edge_type new_type,
struct mgp_memory *memory, struct mgp_edge **result);
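// Editor's sketch, not part of this header: retargeting an edge and then
// changing its type. Each call yields a *new* mgp_edge in `result` that must be
// freed with mgp_edge_destroy; the names are hypothetical and checks abbreviated.
static inline void example_rewire_edge(struct mgp_graph *graph, struct mgp_edge *e,
                                       struct mgp_vertex *new_from, struct mgp_memory *memory) {
  struct mgp_edge *moved = nullptr;
  if (mgp_graph_edge_set_from(graph, e, new_from, memory, &moved) != mgp_error::MGP_ERROR_NO_ERROR) return;
  struct mgp_edge_type knows{"KNOWS"};  // mgp_edge_type carries the type name
  struct mgp_edge *retyped = nullptr;
  if (mgp_graph_edge_change_type(graph, moved, knows, memory, &retyped) == mgp_error::MGP_ERROR_NO_ERROR) {
    mgp_edge_destroy(retyped);
  }
  mgp_edge_destroy(moved);
}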
/// Delete an edge from the graph.
/// Return mgp_error::MGP_ERROR_IMMUTABLE_OBJECT if `graph` is immutable.
/// Return mgp_error::MGP_ERROR_SERIALIZATION_ERROR if `edge`, its source or destination vertex has been modified by
@ -1292,6 +1494,12 @@ struct mgp_proc;
/// Describes a Memgraph magic function.
struct mgp_func;
/// All available log levels that can be used in the mgp_log function.
MGP_ENUM_CLASS mgp_log_level{
MGP_LOG_LEVEL_TRACE, MGP_LOG_LEVEL_DEBUG, MGP_LOG_LEVEL_INFO,
MGP_LOG_LEVEL_WARN, MGP_LOG_LEVEL_ERROR, MGP_LOG_LEVEL_CRITICAL,
};
/// Entry-point for a query module read procedure, invoked through openCypher.
///
/// Passed in arguments will not live longer than the callback's execution.
@ -1299,6 +1507,13 @@ struct mgp_func;
/// to allocate global resources.
typedef void (*mgp_proc_cb)(struct mgp_list *, struct mgp_graph *, struct mgp_result *, struct mgp_memory *);
/// Cleanup for a query module batched read procedure. Can't be invoked through openCypher. Cleans the batched stream.
typedef void (*mgp_proc_cleanup)();
/// Initializer for a query module batched read procedure. Can't be invoked through OpenCypher. Initializes batched
/// stream.
typedef void (*mgp_proc_initializer)(struct mgp_list *, struct mgp_graph *, struct mgp_memory *);
/// Register a read-only procedure to a module.
///
/// The `name` must be a sequence of digits, underscores, lowercase and
@ -1323,6 +1538,30 @@ enum mgp_error mgp_module_add_read_procedure(struct mgp_module *module, const ch
enum mgp_error mgp_module_add_write_procedure(struct mgp_module *module, const char *name, mgp_proc_cb cb,
struct mgp_proc **result);
/// Register a readable batched procedure to a module.
///
/// The `name` must be a valid identifier, following the same rules as the
/// procedure `name` in mgp_module_add_read_procedure.
///
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if unable to allocate memory for mgp_proc.
/// Return mgp_error::MGP_ERROR_INVALID_ARGUMENT if `name` is not a valid procedure name.
/// Return mgp_error::MGP_ERROR_LOGIC_ERROR if a procedure with the same name was already registered.
enum mgp_error mgp_module_add_batch_read_procedure(struct mgp_module *module, const char *name, mgp_proc_cb cb,
mgp_proc_initializer initializer, mgp_proc_cleanup cleanup,
struct mgp_proc **result);
/// Register a writeable batched procedure to a module.
///
/// The `name` must be a valid identifier, following the same rules as the
/// procedure `name` in mgp_module_add_read_procedure.
///
/// Return mgp_error::MGP_ERROR_UNABLE_TO_ALLOCATE if unable to allocate memory for mgp_proc.
/// Return mgp_error::MGP_ERROR_INVALID_ARGUMENT if `name` is not a valid procedure name.
/// Return mgp_error::MGP_ERROR_LOGIC_ERROR if a procedure with the same name was already registered.
enum mgp_error mgp_module_add_batch_write_procedure(struct mgp_module *module, const char *name, mgp_proc_cb cb,
mgp_proc_initializer initializer, mgp_proc_cleanup cleanup,
struct mgp_proc **result);
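// Editor's sketch, not part of this header: the registration shape for a
// batched read procedure. All names are hypothetical and the bodies are stubs;
// `initializer` primes the batched stream and `cleanup` releases it.
static void example_batch_init(struct mgp_list * /*args*/, struct mgp_graph * /*graph*/,
                               struct mgp_memory * /*memory*/) { /* open the batched stream */ }
static void example_batch_cb(struct mgp_list * /*args*/, struct mgp_graph * /*graph*/,
                             struct mgp_result * /*result*/, struct mgp_memory * /*memory*/) { /* emit one batch */ }
static void example_batch_cleanup() { /* release the batched stream */ }

static inline int example_register_batched(struct mgp_module *module) {
  struct mgp_proc *proc = nullptr;
  return mgp_module_add_batch_read_procedure(module, "stream_rows", example_batch_cb, example_batch_init,
                                             example_batch_cleanup, &proc) == mgp_error::MGP_ERROR_NO_ERROR
             ? 0
             : 1;
}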
/// Add a required argument to a procedure.
///
/// The order of adding arguments will correspond to the order the procedure
@ -1386,6 +1625,9 @@ enum mgp_error mgp_proc_add_result(struct mgp_proc *proc, const char *name, stru
/// Return mgp_error::MGP_ERROR_INVALID_ARGUMENT if `name` is not a valid result name.
/// Return mgp_error::MGP_ERROR_LOGIC_ERROR if a result field with the same name was already added.
enum mgp_error mgp_proc_add_deprecated_result(struct mgp_proc *proc, const char *name, struct mgp_type *type);
/// Log a message at the given log level.
enum mgp_error mgp_log(enum mgp_log_level log_level, const char *output);
///@}
/// @name Execution
@ -1395,7 +1637,10 @@ enum mgp_error mgp_proc_add_deprecated_result(struct mgp_proc *proc, const char
/// @{
/// Return non-zero if the currently executing procedure should abort as soon as
/// possible.
/// possible. If non-zero, the reasons are:
/// (1) The transaction was requested to be terminated
/// (2) The server is gracefully shutting down
/// (3) The transaction has hit its timeout threshold
///
/// Procedures which perform heavyweight processing run the risk of running too
/// long and going over the query execution time limit. To prevent this, such
@ -1512,6 +1757,10 @@ enum mgp_error mgp_module_add_transformation(struct mgp_module *module, const ch
///
///@{
/// State of the database that is exposed to magic functions. Currently it is unused, but it enables extending the
/// functionality of magic functions in the future without breaking the API.
struct mgp_func_context;
/// Add a required argument to a function.
///
/// The order of the added arguments corresponds to the signature of the openCypher function.

4591
include/mgp.hpp Normal file

File diff suppressed because it is too large.

File diff suppressed because it is too large.

1674
include/mgp_mock.py Normal file

File diff suppressed because it is too large.

103
init
View File

@ -5,13 +5,16 @@ cd "$DIR"
source "$DIR/environment/util.sh"
DISTRO=$(operating_system)
ARCHITECTURE=$(architecture)
function print_help () {
echo "Usage: $0 [OPTION]"
echo -e "Check for missing packages and setup the project.\n"
echo "Optional arguments:"
echo -e " -h\tdisplay this help and exit"
echo -e " --without-libs-setup\tskip the step for setting up libs"
echo -e " --wsl-quicklisp-proxy \"host:port\"\tquicklist HTTP proxy (this flag + HTTP proxy are required on WSL)"
echo -e " --ci\tscript is being run inside ci"
}
function setup_virtualenv () {
@ -32,28 +35,22 @@ function setup_virtualenv () {
popd > /dev/null
}
wsl_quicklisp_proxy=""
setup_libs=true
ci=false
if [[ $# -eq 1 && "$1" == "-h" ]]; then
print_help
exit 0
else
while(($#)); do
case "$1" in
--wsl-quicklisp-proxy)
shift
if [[ $# -eq 0 ]]; then
echo "Missing proxy URL"
print_help
exit 1
fi
wsl_quicklisp_proxy=":proxy \"http://$1/\""
shift
;;
--without-libs-setup)
shift
setup_libs=false
;;
--ci)
shift
ci=true
;;
*)
# unknown option
echo "Invalid argument provided: $1"
@ -64,8 +61,6 @@ else
done
fi
DISTRO=$(operating_system)
ARCHITECTURE=$(architecture)
if [ "${ARCHITECTURE}" = "arm64" ] || [ "${ARCHITECTURE}" = "aarch64" ]; then
OS_SCRIPT=$DIR/environment/os/$DISTRO-arm.sh
else
@ -78,37 +73,22 @@ echo "All packages are in-place..."
# create a default build directory
mkdir -p ./build
# quicklisp package manager for Common Lisp
quicklisp_install_dir="$HOME/quicklisp"
if [[ -v QUICKLISP_HOME ]]; then
quicklisp_install_dir="${QUICKLISP_HOME}"
fi
if [[ ! -f "${quicklisp_install_dir}/setup.lisp" ]]; then
wget -nv https://beta.quicklisp.org/quicklisp.lisp -O quicklisp.lisp || exit 1
echo \
"
(load \"${DIR}/quicklisp.lisp\")
(quicklisp-quickstart:install $wsl_quicklisp_proxy :path \"${quicklisp_install_dir}\")
" | sbcl --script || exit 1
rm -rf quicklisp.lisp || exit 1
fi
ln -Tfs "$DIR/src/lisp" "${quicklisp_install_dir}/local-projects/lcp"
# Install LCP dependencies
# TODO: We should at some point cache or have a mirror of packages we use.
# TODO: move the installation of LCP's dependencies into ./setup.sh
echo \
"
(load \"${quicklisp_install_dir}/setup.lisp\")
(ql:quickload '(:lcp :lcp/test) :silent t)
" | sbcl --script
if [[ "$setup_libs" == "true" ]]; then
# Setup libs (download).
cd libs
./cleanup.sh
./setup.sh
cd ..
# Setup libs (download).
cd libs
./cleanup.sh
./setup.sh
cd ..
fi
# Fix for centos 7 during release
if [[ "$ci" == "false" ]]; then
if [ "${DISTRO}" = "centos-7" ] || [ "${DISTRO}" = "debian-11" ] || [ "${DISTRO}" = "amzn-2" ]; then
if python3 -m pip show virtualenv >/dev/null 2>/dev/null; then
python3 -m pip uninstall -y virtualenv
fi
python3 -m pip install virtualenv
fi
fi
# setup gql_behave dependencies
@ -121,6 +101,10 @@ setup_virtualenv tests/stress
setup_virtualenv tests/integration/ldap
# Setup tests dependencies.
# NOTE: This is commented out because of the build order (at the time of
# execution mgclient is not built yet) which makes this setup to fail. mgclient
# is built during the make phase. The tests/setup.sh is called under GHA CI
# jobs.
# cd tests
# ./setup.sh
# cd ..
@ -130,15 +114,30 @@ setup_virtualenv tests/integration/ldap
echo "Done installing dependencies for Memgraph"
echo "Linking git hooks"
for hook in $(find $DIR/.githooks -type f -printf "%f\n"); do
ln -s -f "$DIR/.githooks/$hook" "$DIR/.git/hooks/$hook"
echo "Added $hook hook"
done;
echo "Linking git hooks OR skip if .git folder is not there"
if [ -d "$DIR/.git" ]; then
for hook in $(find $DIR/.githooks -type f -printf "%f\n"); do
ln -s -f "$DIR/.githooks/$hook" "$DIR/.git/hooks/$hook"
echo "Added $hook hook"
done;
else
echo "WARNING: .git folder not present, skip adding hooks"
fi
# Install precommit hook
python3 -m pip install pre-commit
python3 -m pre_commit install
# Install the pre-commit hook, except on old operating systems: we don't
# develop on them, so the hook isn't required and we can rely on the latest
# packages.
if [[ "$ci" == "false" ]]; then
if [ "${DISTRO}" != "centos-7" ] && [ "$DISTRO" != "debian-10" ] && [ "${DISTRO}" != "ubuntu-18.04" ] && [ "${DISTRO}" != "amzn-2" ]; then
python3 -m pip install pre-commit
python3 -m pre_commit install
# Install py format tools for usage during the development.
echo "Install black formatter"
python3 -m pip install black==23.1.*
echo "Install isort"
python3 -m pip install isort==5.12.*
fi
fi
# Link `include/mgp.py` with `release/mgp/mgp.py`
ln -v -f include/mgp.py release/mgp/mgp.py

2
libs/.gitignore vendored
View File

@ -6,3 +6,5 @@
!__main.cpp
!pulsar.patch
!antlr4.10.1.patch
!rocksdb8.1.1.patch
!nuraft2.1.0.patch

View File

@ -4,7 +4,8 @@ include(GNUInstallDirs)
include(ProcessorCount)
ProcessorCount(NPROC)
if (NPROC EQUAL 0)
if(NPROC EQUAL 0)
set(NPROC 1)
endif()
@ -12,9 +13,10 @@ find_package(Boost 1.78 REQUIRED)
find_package(BZip2 1.0.6 REQUIRED)
find_package(Threads REQUIRED)
set(GFLAGS_NOTHREADS OFF)
# NOTE: config/generate.py depends on the gflags help XML format.
find_package(gflags REQUIRED)
find_package(fmt 8.0.1)
find_package(Jemalloc REQUIRED)
find_package(fmt 8.0.1 REQUIRED)
find_package(ZLIB 1.2.11 REQUIRED)
set(LIB_DIR ${CMAKE_CURRENT_SOURCE_DIR})
@ -23,23 +25,27 @@ set(LIB_DIR ${CMAKE_CURRENT_SOURCE_DIR})
function(import_header_library name include_dir)
add_library(${name} INTERFACE IMPORTED GLOBAL)
set_property(TARGET ${name} PROPERTY
INTERFACE_INCLUDE_DIRECTORIES ${include_dir})
string(TOUPPER ${name} _upper_name)
set(${_upper_name}_INCLUDE_DIR ${include_dir} CACHE FILEPATH
"Path to ${name} include directory" FORCE)
"Path to ${name} include directory" FORCE)
mark_as_advanced(${_upper_name}_INCLUDE_DIR)
add_library(lib::${name} ALIAS ${name})
endfunction(import_header_library)
function(import_library name type location include_dir)
add_library(${name} ${type} IMPORTED GLOBAL)
if (${ARGN})
if(${ARGN})
# Optional argument is the name of the external project that we need to
# depend on.
add_dependencies(${name} ${ARGN0})
else()
add_dependencies(${name} ${name}-proj)
endif()
set_property(TARGET ${name} PROPERTY IMPORTED_LOCATION ${location})
# We need to create the include directory first in order to be able to add it
# as an include directory. The header files in the include directory will be
# generated later during the build process.
@ -59,29 +65,34 @@ function(add_external_project name)
set(options NO_C_COMPILER)
set(one_value_kwargs SOURCE_DIR BUILD_IN_SOURCE)
set(multi_value_kwargs CMAKE_ARGS DEPENDS INSTALL_COMMAND BUILD_COMMAND
CONFIGURE_COMMAND)
cmake_parse_arguments(KW "${options}" "${one_value_kwargs}" "${multi_value_kwargs}" ${ARGN})
set(source_dir ${CMAKE_CURRENT_SOURCE_DIR}/${name})
if (KW_SOURCE_DIR)
if(KW_SOURCE_DIR)
set(source_dir ${KW_SOURCE_DIR})
endif()
set(build_in_source 0)
if (KW_BUILD_IN_SOURCE)
if(KW_BUILD_IN_SOURCE)
set(build_in_source ${KW_BUILD_IN_SOURCE})
endif()
if (NOT KW_NO_C_COMPILER)
if(NOT KW_NO_C_COMPILER)
set(KW_CMAKE_ARGS -DCMAKE_C_COMPILER=${CMAKE_C_COMPILER} ${KW_CMAKE_ARGS})
endif()
ExternalProject_Add(${name}-proj DEPENDS ${KW_DEPENDS}
PREFIX ${source_dir} SOURCE_DIR ${source_dir}
BUILD_IN_SOURCE ${build_in_source}
CONFIGURE_COMMAND ${KW_CONFIGURE_COMMAND}
CMAKE_ARGS -DCMAKE_BUILD_TYPE=Release
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_INSTALL_PREFIX=${source_dir}
${KW_CMAKE_ARGS}
INSTALL_COMMAND ${KW_INSTALL_COMMAND}
BUILD_COMMAND ${KW_BUILD_COMMAND})
endfunction(add_external_project)
# Calls `add_external_project`, sets NAME_LIBRARY, NAME_INCLUDE_DIR variables
@ -90,23 +101,34 @@ macro(import_external_library name type library_location include_dir)
add_external_project(${name} ${ARGN})
string(TOUPPER ${name} _upper_name)
set(${_upper_name}_LIBRARY ${library_location} CACHE FILEPATH
"Path to ${name} library" FORCE)
"Path to ${name} library" FORCE)
set(${_upper_name}_INCLUDE_DIR ${include_dir} CACHE FILEPATH
"Path to ${name} include directory" FORCE)
"Path to ${name} include directory" FORCE)
mark_as_advanced(${_upper_name}_LIBRARY ${_upper_name}_INCLUDE_DIR)
import_library(${name} ${type} ${${_upper_name}_LIBRARY} ${${_upper_name}_INCLUDE_DIR})
endmacro(import_external_library)
macro(set_path_external_library name type library_location include_dir)
string(TOUPPER ${name} _upper_name)
set(${_upper_name}_LIBRARY ${library_location} CACHE FILEPATH
"Path to ${name} library" FORCE)
set(${_upper_name}_INCLUDE_DIR ${include_dir} CACHE FILEPATH
"Path to ${name} include directory" FORCE)
mark_as_advanced(${name}_LIBRARY ${name}_INCLUDE_DIR)
endmacro(set_path_external_library)
# setup antlr
import_external_library(antlr4 STATIC
${CMAKE_CURRENT_SOURCE_DIR}/antlr4/runtime/Cpp/lib/libantlr4-runtime.a
${CMAKE_CURRENT_SOURCE_DIR}/antlr4/runtime/Cpp/include/antlr4-runtime
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/antlr4/runtime/Cpp
CMAKE_ARGS # http://stackoverflow.com/questions/37096062/get-a-basic-c-program-to-compile-using-clang-on-ubuntu-16/38385967#38385967
-DWITH_LIBCXX=OFF # because of debian bug
-DCMAKE_SKIP_INSTALL_ALL_DEPENDENCY=true
-DCMAKE_CXX_STANDARD=20
-DANTLR_BUILD_CPP_TESTS=OFF
BUILD_COMMAND $(MAKE) antlr4_static
INSTALL_COMMAND $(MAKE) install)
@ -114,6 +136,7 @@ import_external_library(antlr4 STATIC
import_external_library(benchmark STATIC
${CMAKE_CURRENT_SOURCE_DIR}/benchmark/${CMAKE_INSTALL_LIBDIR}/libbenchmark.a
${CMAKE_CURRENT_SOURCE_DIR}/benchmark/include
# Skip testing. The tests don't compile with Clang 8.
CMAKE_ARGS -DBENCHMARK_ENABLE_TESTING=OFF)
@ -129,15 +152,15 @@ add_subdirectory(rapidcheck EXCLUDE_FROM_ALL)
# setup google test
add_external_project(gtest SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/googletest)
set(GTEST_INCLUDE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/googletest/include
CACHE PATH "Path to gtest and gmock include directory" FORCE)
set(GMOCK_LIBRARY ${CMAKE_CURRENT_SOURCE_DIR}/googletest/lib/libgmock.a
CACHE FILEPATH "Path to gmock library" FORCE)
CACHE FILEPATH "Path to gmock library" FORCE)
set(GMOCK_MAIN_LIBRARY ${CMAKE_CURRENT_SOURCE_DIR}/googletest/lib/libgmock_main.a
CACHE FILEPATH "Path to gmock_main library" FORCE)
CACHE FILEPATH "Path to gmock_main library" FORCE)
set(GTEST_LIBRARY ${CMAKE_CURRENT_SOURCE_DIR}/googletest/lib/libgtest.a
CACHE FILEPATH "Path to gtest library" FORCE)
CACHE FILEPATH "Path to gtest library" FORCE)
set(GTEST_MAIN_LIBRARY ${CMAKE_CURRENT_SOURCE_DIR}/googletest/lib/libgtest_main.a
CACHE FILEPATH "Path to gtest_main library" FORCE)
CACHE FILEPATH "Path to gtest_main library" FORCE)
mark_as_advanced(GTEST_INCLUDE_DIR GMOCK_LIBRARY GMOCK_MAIN_LIBRARY GTEST_LIBRARY GTEST_MAIN_LIBRARY)
import_library(gtest STATIC ${GTEST_LIBRARY} ${GTEST_INCLUDE_DIR} gtest-proj)
import_library(gtest_main STATIC ${GTEST_MAIN_LIBRARY} ${GTEST_INCLUDE_DIR} gtest-proj)
@ -155,10 +178,10 @@ import_external_library(rocksdb STATIC
${CMAKE_CURRENT_SOURCE_DIR}/rocksdb/lib/librocksdb.a
${CMAKE_CURRENT_SOURCE_DIR}/rocksdb/include
CMAKE_ARGS -DUSE_RTTI=ON
-DWITH_TESTS=OFF
-DGFLAGS_NOTHREADS=OFF
-DCMAKE_INSTALL_LIBDIR=lib
-DCMAKE_SKIP_INSTALL_ALL_DEPENDENCY=true
BUILD_COMMAND $(MAKE) rocksdb)
# Setup libbcrypt
@ -167,8 +190,8 @@ import_external_library(libbcrypt STATIC
${CMAKE_CURRENT_SOURCE_DIR}/libbcrypt
CONFIGURE_COMMAND sed s/-Wcast-align// -i ${CMAKE_CURRENT_SOURCE_DIR}/libbcrypt/crypt_blowfish/Makefile
BUILD_COMMAND make -C ${CMAKE_CURRENT_SOURCE_DIR}/libbcrypt
CC=${CMAKE_C_COMPILER}
CXX=${CMAKE_CXX_COMPILER}
INSTALL_COMMAND true)
# Setup mgclient
@ -176,16 +199,16 @@ import_external_library(mgclient STATIC
${CMAKE_CURRENT_SOURCE_DIR}/mgclient/lib/libmgclient.a
${CMAKE_CURRENT_SOURCE_DIR}/mgclient/include
CMAKE_ARGS -DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DBUILD_TESTING=OFF
-DBUILD_CPP_BINDINGS=ON)
find_package(OpenSSL REQUIRED)
target_link_libraries(mgclient INTERFACE ${OPENSSL_LIBRARIES})
add_external_project(mgconsole
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/mgconsole
CMAKE_ARGS
-DCMAKE_INSTALL_PREFIX:PATH=${CMAKE_BINARY_DIR}
BUILD_COMMAND $(MAKE) mgconsole)
add_custom_target(mgconsole DEPENDS mgconsole-proj)
@ -202,14 +225,15 @@ import_external_library(librdkafka STATIC
${CMAKE_CURRENT_SOURCE_DIR}/librdkafka/lib/librdkafka.a
${CMAKE_CURRENT_SOURCE_DIR}/librdkafka/include
CMAKE_ARGS -DRDKAFKA_BUILD_STATIC=ON
-DRDKAFKA_BUILD_EXAMPLES=OFF
-DRDKAFKA_BUILD_TESTS=OFF
-DWITH_ZSTD=OFF
-DENABLE_LZ4_EXT=OFF
-DCMAKE_INSTALL_LIBDIR=lib
-DWITH_SSL=ON
# If we want SASL, we need to install it on build machines
-DWITH_SASL=OFF)
target_link_libraries(librdkafka INTERFACE ${OPENSSL_LIBRARIES} ZLIB::ZLIB)
import_library(librdkafka++ STATIC
@ -230,24 +254,24 @@ import_external_library(pulsar STATIC
${CMAKE_CURRENT_SOURCE_DIR}/pulsar/install/include
BUILD_IN_SOURCE 1
CONFIGURE_COMMAND cmake pulsar-client-cpp
-DCMAKE_INSTALL_PREFIX=${CMAKE_CURRENT_SOURCE_DIR}/pulsar/install
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DBUILD_DYNAMIC_LIB=OFF
-DBUILD_STATIC_LIB=ON
-DBUILD_TESTS=OFF
-DLINK_STATIC=ON
-DPROTOC_PATH=${PROTOBUF_ROOT}/bin/protoc
-DBOOST_ROOT=${BOOST_ROOT}
-DCMAKE_PREFIX_PATH=${PROTOBUF_ROOT}
-DProtobuf_INCLUDE_DIRS=${PROTOBUF_ROOT}/include
-DBUILD_PYTHON_WRAPPER=OFF
-DBUILD_PERF_TOOLS=OFF
-DUSE_LOG4CXX=OFF
BUILD_COMMAND $(MAKE) pulsarStaticWithDeps)
add_dependencies(pulsar-proj protobuf)
if (${MG_ARCH} STREQUAL "ARM64")
if(${MG_ARCH} STREQUAL "ARM64")
set(MG_LIBRDTSC_CMAKE_ARGS -DLIBRDTSC_ARCH_x86=OFF -DLIBRDTSC_ARCH_ARM64=ON)
endif()
@ -256,3 +280,52 @@ import_external_library(librdtsc STATIC
${CMAKE_CURRENT_SOURCE_DIR}/librdtsc/include
CMAKE_ARGS ${MG_LIBRDTSC_CMAKE_ARGS}
BUILD_COMMAND $(MAKE) rdtsc)
# setup ctre
import_header_library(ctre ${CMAKE_CURRENT_SOURCE_DIR})
# setup absl (cmake sub_directory tolerant)
set(ABSL_PROPAGATE_CXX_STD ON)
add_subdirectory(absl EXCLUDE_FROM_ALL)
# set Jemalloc
set_path_external_library(jemalloc STATIC
${CMAKE_CURRENT_SOURCE_DIR}/jemalloc/lib/libjemalloc.a
${CMAKE_CURRENT_SOURCE_DIR}/jemalloc/include/)
import_header_library(rangev3 ${CMAKE_CURRENT_SOURCE_DIR}/rangev3/include)
ExternalProject_Add(mgcxx-proj
PREFIX mgcxx-proj
GIT_REPOSITORY https://github.com/memgraph/mgcxx
GIT_TAG "v0.0.4"
CMAKE_ARGS
"-DCMAKE_INSTALL_PREFIX=<INSTALL_DIR>"
"-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}"
"-DENABLE_TESTS=OFF"
"-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}"
"-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}"
INSTALL_DIR "${PROJECT_BINARY_DIR}/mgcxx"
)
ExternalProject_Get_Property(mgcxx-proj install_dir)
set(MGCXX_ROOT ${install_dir})
add_library(tantivy_text_search STATIC IMPORTED GLOBAL)
add_dependencies(tantivy_text_search mgcxx-proj)
set_property(TARGET tantivy_text_search PROPERTY IMPORTED_LOCATION ${MGCXX_ROOT}/lib/libtantivy_text_search.a)
add_library(mgcxx_text_search STATIC IMPORTED GLOBAL)
add_dependencies(mgcxx_text_search mgcxx-proj)
set_property(TARGET mgcxx_text_search PROPERTY IMPORTED_LOCATION ${MGCXX_ROOT}/lib/libmgcxx_text_search.a)
# We need to create the include directory first in order to be able to add it
# as an include directory. The header files in the include directory will be
# generated later during the build process.
file(MAKE_DIRECTORY ${MGCXX_ROOT}/include)
set_property(TARGET mgcxx_text_search PROPERTY INTERFACE_INCLUDE_DIRECTORIES ${MGCXX_ROOT}/include)
# Setup NuRaft
import_external_library(nuraft STATIC
${CMAKE_CURRENT_SOURCE_DIR}/nuraft/lib/libnuraft.a
${CMAKE_CURRENT_SOURCE_DIR}/nuraft/include/)
find_package(OpenSSL REQUIRED)
target_link_libraries(nuraft INTERFACE ${OPENSSL_LIBRARIES})

libs/librdtsc.patch

@ -5,7 +5,7 @@ index ee9b58c..31359a9 100644
@@ -48,7 +48,7 @@ option(LIBRDTSC_USE_PMU "Enables PMU usage on ARM platforms" OFF)
# | Library Build and Install Properties |
# +--------------------------------------------------------+
-add_library(rdtsc SHARED
+add_library(rdtsc
src/cycles.c
@ -14,7 +14,7 @@ index ee9b58c..31359a9 100644
@@ -72,15 +72,6 @@ target_include_directories(rdtsc
PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include
)
-# Install directory changes depending on build mode
-if (CMAKE_BUILD_TYPE MATCHES "^[Dd]ebug")
- # During debug, the library will be installed into a local directory
@ -27,3 +27,15 @@ index ee9b58c..31359a9 100644
# Specifying what to export when installing (GNUInstallDirs required)
install(TARGETS rdtsc
EXPORT librstsc-config
diff --git a/include/librdtsc/common_timer.h b/include/librdtsc/common_timer.h
index a6922d8..080dc77 100644
--- a/include/librdtsc/common_timer.h
+++ b/include/librdtsc/common_timer.h
@@ -2,6 +2,7 @@
#define LIBRDTSC_COMMON_TIMER_H
#include <librdtsc/common.h>
+#include <librdtsc/cycles.h>
extern uint64_t rdtsc_get_tsc_freq_arch();
extern uint64_t rdtsc_get_tsc_freq();

libs/nuraft2.1.0.patch Normal file

@ -0,0 +1,24 @@
diff --git a/include/libnuraft/asio_service_options.hxx b/include/libnuraft/asio_service_options.hxx
index 8fe1ec9..9497355 100644
--- a/include/libnuraft/asio_service_options.hxx
+++ b/include/libnuraft/asio_service_options.hxx
@@ -17,6 +17,7 @@ limitations under the License.
#pragma once
+#include <cstdint>
#include <functional>
#include <string>
#include <system_error>
diff --git a/include/libnuraft/callback.hxx b/include/libnuraft/callback.hxx
index 7b71624..d48c1e2 100644
--- a/include/libnuraft/callback.hxx
+++ b/include/libnuraft/callback.hxx
@@ -18,6 +18,7 @@ limitations under the License.
#ifndef _CALLBACK_H_
#define _CALLBACK_H_
+#include <cstdint>
#include <functional>
#include <string>

libs/rocksdb.patch (deleted)

@ -1,21 +0,0 @@
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 6761929..6a369af 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -220,6 +220,7 @@ else()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -momit-leaf-frame-pointer")
endif()
endif()
+ set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-deprecated-copy -Wno-unused-but-set-variable")
endif()
include(CheckCCompilerFlag)
@@ -997,7 +998,7 @@ if(NOT WIN32 OR ROCKSDB_INSTALL_ON_WINDOWS)
if(ROCKSDB_BUILD_SHARED)
install(
- TARGETS ${ROCKSDB_SHARED_LIB}
+ TARGETS ${ROCKSDB_SHARED_LIB} OPTIONAL
EXPORT RocksDBTargets
COMPONENT runtime
ARCHIVE DESTINATION "${CMAKE_INSTALL_LIBDIR}"

libs/rocksdb8.1.1.patch Normal file

@ -0,0 +1,13 @@
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 598c728..816c705 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -1242,7 +1242,7 @@ if(NOT WIN32 OR ROCKSDB_INSTALL_ON_WINDOWS)
if(ROCKSDB_BUILD_SHARED)
install(
- TARGETS ${ROCKSDB_SHARED_LIB}
+ TARGETS ${ROCKSDB_SHARED_LIB} OPTIONAL
EXPORT RocksDBTargets
COMPONENT runtime
ARCHIVE DESTINATION "${CMAKE_INSTALL_LIBDIR}"

libs/setup.sh

@ -71,8 +71,8 @@ file_get_try_double () {
if [ -z "$primary_url" ]; then echo "Primary should not be empty." && exit 1; fi
if [ -z "$secondary_url" ]; then echo "Secondary should not be empty." && exit 1; fi
filename="$(basename "$secondary_url")"
wget -nv "$primary_url" -O "$filename" || wget -nv "$secondary_url" -O "$filename" || exit 1
echo ""
# Redirect the primary (cache) attempt to /dev/null to avoid confusing new contributors, because only CI has access to the cache.
wget -nv "$primary_url" -O "$filename" >/dev/null 2>&1 || wget -nv "$secondary_url" -O "$filename" || exit 1
}
repo_clone_try_double () {
@ -86,8 +86,8 @@ repo_clone_try_double () {
if [ -z "$secondary_url" ]; then echo "Secondary should not be empty." && exit 1; fi
if [ -z "$folder_name" ]; then echo "Clone folder should not be empty." && exit 1; fi
if [ -z "$ref" ]; then echo "Git clone ref should not be empty." && exit 1; fi
clone "$primary_url" "$folder_name" "$ref" "$shallow" || clone "$secondary_url" "$folder_name" "$ref" "$shallow" || exit 1
echo ""
# Redirect the primary (cache) attempt to /dev/null to avoid confusing new contributors, because only CI has access to the cache.
clone "$primary_url" "$folder_name" "$ref" "$shallow" >/dev/null 2>&1 || clone "$secondary_url" "$folder_name" "$ref" "$shallow" || exit 1
}
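Both helpers above share the same cache-then-upstream shape: try the CI-local mirror quietly, then fall back to the public source of truth. A toy illustration of the fallback with hypothetical URLs (the .invalid host is guaranteed to fail, so the second wget runs):
    wget -nv "http://cache.invalid/some-file" -O some-file >/dev/null 2>&1 \
      || wget -nv "https://example.com/index.html" -O some-file || exit 1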
# List all dependencies.
@ -117,11 +117,17 @@ declare -A primary_urls=(
["mgconsole"]="http://$local_cache_host/git/mgconsole.git"
["spdlog"]="http://$local_cache_host/git/spdlog"
["nlohmann"]="http://$local_cache_host/file/nlohmann/json/4f8fba14066156b73f1189a2b8bd568bde5284c5/single_include/nlohmann/json.hpp"
["neo4j"]="http://$local_cache_host/file/neo4j-community-3.2.3-unix.tar.gz"
["neo4j"]="http://$local_cache_host/file/neo4j-community-5.6.0-unix.tar.gz"
["librdkafka"]="http://$local_cache_host/git/librdkafka.git"
["protobuf"]="http://$local_cache_host/git/protobuf.git"
["pulsar"]="http://$local_cache_host/git/pulsar.git"
["librdtsc"]="http://$local_cache_host/git/librdtsc.git"
["ctre"]="http://$local_cache_host/file/hanickadot/compile-time-regular-expressions/v3.7.2/single-header/ctre.hpp"
["absl"]="http://$local_cache_host/git/abseil-cpp.git"
["jemalloc"]="http://$local_cache_host/git/jemalloc.git"
["range-v3"]="http://$local_cache_host/git/range-v3.git"
["nuraft"]="http://$local_cache_host/git/NuRaft.git"
["asio"]="http://$local_cache_host/git/asio.git"
)
# The goal of secondary urls is to have links to the "source of truth" of
@ -139,14 +145,20 @@ declare -A secondary_urls=(
["rocksdb"]="https://github.com/facebook/rocksdb.git"
["mgclient"]="https://github.com/memgraph/mgclient.git"
["pymgclient"]="https://github.com/memgraph/pymgclient.git"
["mgconsole"]="http://github.com/memgraph/mgconsole.git"
["mgconsole"]="https://github.com/memgraph/mgconsole.git"
["spdlog"]="https://github.com/gabime/spdlog"
["nlohmann"]="https://raw.githubusercontent.com/nlohmann/json/4f8fba14066156b73f1189a2b8bd568bde5284c5/single_include/nlohmann/json.hpp"
["neo4j"]="https://s3-eu-west-1.amazonaws.com/deps.memgraph.io/neo4j-community-3.2.3-unix.tar.gz"
["neo4j"]="https://dist.neo4j.org/neo4j-community-5.6.0-unix.tar.gz"
["librdkafka"]="https://github.com/edenhill/librdkafka.git"
["protobuf"]="https://github.com/protocolbuffers/protobuf.git"
["pulsar"]="https://github.com/apache/pulsar.git"
["librdtsc"]="https://github.com/gabrieleara/librdtsc.git"
["ctre"]="https://raw.githubusercontent.com/hanickadot/compile-time-regular-expressions/v3.7.2/single-header/ctre.hpp"
["absl"]="https://github.com/abseil/abseil-cpp.git"
["jemalloc"]="https://github.com/jemalloc/jemalloc.git"
["range-v3"]="https://github.com/ericniebler/range-v3.git"
["nuraft"]="https://github.com/eBay/NuRaft.git"
["asio"]="https://github.com/chriskohlhoff/asio.git"
)
# antlr
@ -158,12 +170,11 @@ pushd antlr4
git apply ../antlr4.10.1.patch
popd
# cppitertools v2.0 2019-12-23
cppitertools_ref="cb3635456bdb531121b82b4d2e3afc7ae1f56d47"
cppitertools_ref="v2.1" # 2021-01-15
repo_clone_try_double "${primary_urls[cppitertools]}" "${secondary_urls[cppitertools]}" "cppitertools" "$cppitertools_ref"
# rapidcheck
rapidcheck_tag="7bc7d302191a4f3d0bf005692677126136e02f60" # (2020-05-04)
rapidcheck_tag="1c91f40e64d87869250cfb610376c629307bf77d" # (2023-08-15)
repo_clone_try_double "${primary_urls[rapidcheck]}" "${secondary_urls[rapidcheck]}" "rapidcheck" "$rapidcheck_tag"
# google benchmark
@ -171,7 +182,7 @@ benchmark_tag="v1.6.0"
repo_clone_try_double "${primary_urls[gbenchmark]}" "${secondary_urls[gbenchmark]}" "benchmark" "$benchmark_tag" true
# google test
googletest_tag="release-1.8.0"
googletest_tag="v1.14.0"
repo_clone_try_double "${primary_urls[gtest]}" "${secondary_urls[gtest]}" "googletest" "$googletest_tag" true
# libbcrypt
@ -180,9 +191,9 @@ repo_clone_try_double "${primary_urls[libbcrypt]}" "${secondary_urls[libbcrypt]}
# neo4j
file_get_try_double "${primary_urls[neo4j]}" "${secondary_urls[neo4j]}"
tar -xzf neo4j-community-3.2.3-unix.tar.gz
mv neo4j-community-3.2.3 neo4j
rm neo4j-community-3.2.3-unix.tar.gz
tar -xzf neo4j-community-5.6.0-unix.tar.gz
mv neo4j-community-5.6.0 neo4j
rm neo4j-community-5.6.0-unix.tar.gz
# nlohmann json
# We wget the header instead of cloning the repo since the repo is huge (lots of test data).
@ -192,10 +203,10 @@ cd json
file_get_try_double "${primary_urls[nlohmann]}" "${secondary_urls[nlohmann]}"
cd ..
rocksdb_tag="v6.14.6" # (2020-10-14)
rocksdb_tag="v8.1.1" # (2023-04-21)
repo_clone_try_double "${primary_urls[rocksdb]}" "${secondary_urls[rocksdb]}" "rocksdb" "$rocksdb_tag" true
pushd rocksdb
git apply ../rocksdb.patch
git apply ../rocksdb8.1.1.patch
popd
# mgclient
@ -208,10 +219,10 @@ pymgclient_tag="4f85c179e56302d46a1e3e2cf43509db65f062b3" # (2021-01-15)
repo_clone_try_double "${primary_urls[pymgclient]}" "${secondary_urls[pymgclient]}" "pymgclient" "$pymgclient_tag"
# mgconsole
mgconsole_tag="v1.1.0" # (2021-10-07)
mgconsole_tag="v1.4.0" # (2023-05-21)
repo_clone_try_double "${primary_urls[mgconsole]}" "${secondary_urls[mgconsole]}" "mgconsole" "$mgconsole_tag" true
spdlog_tag="v1.9.2" # (2021-08-12)
spdlog_tag="v1.12.0" # (2022-11-02)
repo_clone_try_double "${primary_urls[spdlog]}" "${secondary_urls[spdlog]}" "spdlog" "$spdlog_tag" true
# librdkafka
@ -238,3 +249,46 @@ repo_clone_try_double "${primary_urls[librdtsc]}" "${secondary_urls[librdtsc]}"
pushd librdtsc
git apply ../librdtsc.patch
popd
#ctre
mkdir -p ctre
cd ctre
file_get_try_double "${primary_urls[ctre]}" "${secondary_urls[ctre]}"
cd ..
# abseil 20230125.3
absl_ref="20230125.3"
repo_clone_try_double "${primary_urls[absl]}" "${secondary_urls[absl]}" "absl" "$absl_ref"
# jemalloc ea6b3e973b477b8061e0076bb257dbd7f3faa756
JEMALLOC_COMMIT_VERSION="5.2.1"
repo_clone_try_double "${primary_urls[jemalloc]}" "${secondary_urls[jemalloc]}" "jemalloc" "$JEMALLOC_COMMIT_VERSION"
# This is a hack for the cmake in libs to set the path, and for FindJemalloc to use Jemalloc_INCLUDE_DIR.
pushd jemalloc
./autogen.sh
MALLOC_CONF="background_thread:true,retain:false,percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000" \
./configure \
--disable-cxx \
--with-lg-page=12 \
--with-lg-hugepage=21 \
--enable-shared=no --prefix=$working_dir \
--with-malloc-conf="background_thread:true,retain:false,percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000"
make -j$CPUS install
popd
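Note that the tuning string is supplied twice: via MALLOC_CONF for the build itself and via --with-malloc-conf so the decay and background-thread defaults are baked into the static library. A rough way to confirm the embedded defaults, assuming the build above succeeded (the configuration ends up as a plain string inside the archive):
    strings jemalloc/lib/libjemalloc.a | grep -m1 'background_thread:true'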
#range-v3 release-0.12.0
range_v3_ref="release-0.12.0"
repo_clone_try_double "${primary_urls[range-v3]}" "${secondary_urls[range-v3]}" "rangev3" "$range_v3_ref"
# NuRaft
nuraft_tag="v2.1.0"
repo_clone_try_double "${primary_urls[nuraft]}" "${secondary_urls[nuraft]}" "nuraft" "$nuraft_tag" true
pushd nuraft
git apply ../nuraft2.1.0.patch
asio_tag="asio-1-29-0"
repo_clone_try_double "${primary_urls[asio]}" "${secondary_urls[asio]}" "asio" "$asio_tag" true
./prepare.sh
popd

licenses/BSL.txt

@ -36,7 +36,7 @@ ADDITIONAL USE GRANT: You may use the Licensed Work in accordance with the
3. using the Licensed Work to create a work or solution
which competes (or might reasonably be expected to
compete) with the Licensed Work.
CHANGE DATE: 2026-27-04
CHANGE DATE: 2028-21-01
CHANGE LICENSE: Apache License, Version 2.0
For information about alternative licensing arrangements, please visit: https://memgraph.com/legal.

licenses/MEL.txt

@ -2,8 +2,8 @@ MEMGRAPH
ENTERPRISE LICENCE AGREEMENT
Memgraph Limited is registered in England under registration 10195084 and has its registered office at Suite 4,
Ironstone House, Ironstone Way, Brixworth, Northampton, NN6 9UD (“Memgraph”).
Memgraph Limited is registered in England under registration 10195084 and has its registered office at 90a High Street,
Hertfordshire, Berkhamsted, HP4 2BL United Kingdom ("Memgraph").
Memgraph agrees to license and/or grant you (the “Customer”) access to the Software ( as defined below) and provide

licenses/third-party/abseil-cpp/LICENSE vendored Normal file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1,218 @@
[Apache License, Version 2.0, January 2004 (http://www.apache.org/licenses/); the full text is identical to the abseil-cpp LICENSE above and is followed by these LLVM-specific exceptions:]
--- LLVM Exceptions to the Apache 2.0 License ----
As an exception, if, as a result of your compiling your source code, portions
of this Software are embedded into an Object form of such source code, you
may redistribute such embedded portions in such Object form without complying
with the conditions of Sections 4(a), 4(b) and 4(d) of the License.
In addition, if you combine or link compiled forms of this Software with
software that is licensed under the GPLv2 ("Combined Software") and if a
court of competent jurisdiction determines that the patent provision (Section
3), the indemnity provision (Section 9) or other Section of the License
conflicts with the conditions of the GPLv2, you may retroactively and
prospectively choose to deem waived or otherwise exclude such Section(s) of
the License, but only in their entirety and only with respect to the Combined
Software.

licenses/third-party/ldbc/LICENSE vendored Normal file

@ -0,0 +1,202 @@
[Apache License, Version 2.0, January 2004 (http://www.apache.org/licenses/); the full text is identical to the abseil-cpp LICENSE above.]

pyproject.toml Normal file

@ -0,0 +1,12 @@
[tool.black]
line-length = 120
include = '\.pyi?$'
extend-exclude = '''
/(
| .git
| .__pycache__
| build
| libs
| .cache
)/
'''
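Because Black discovers pyproject.toml from the repository root automatically, a plain invocation picks up the 120-column line length and the exclusion list above (a sketch, assuming Black is installed as pinned earlier in the setup script):
    python3 -m black --check .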

query_modules/CMakeLists.txt

@ -6,32 +6,81 @@ project(memgraph_query_modules)
disallow_in_source_build()
find_package(fmt REQUIRED)
# Everything that is installed here should be under the "query_modules" component.
set(CMAKE_INSTALL_DEFAULT_COMPONENT_NAME "query_modules")
add_library(example SHARED example.c)
target_include_directories(example PRIVATE ${CMAKE_SOURCE_DIR}/include)
target_compile_options(example PRIVATE -Wall)
# Strip the library in release build.
string(TOLOWER ${CMAKE_BUILD_TYPE} lower_build_type)
if (lower_build_type STREQUAL "release")
add_custom_command(TARGET example POST_BUILD
COMMAND strip -s $<TARGET_FILE:example>
COMMENT "Stripping symbols and sections from example module")
endif()
install(PROGRAMS $<TARGET_FILE:example>
add_library(example_c SHARED example.c)
target_include_directories(example_c PRIVATE ${CMAKE_SOURCE_DIR}/include)
target_compile_options(example_c PRIVATE -Wall)
target_link_libraries(example_c PRIVATE -static-libgcc -static-libstdc++)
# Strip C example in release build.
if (lower_build_type STREQUAL "release")
add_custom_command(TARGET example_c POST_BUILD
COMMAND strip -s $<TARGET_FILE:example_c>
COMMENT "Stripping symbols and sections from the C example module")
endif()
install(PROGRAMS $<TARGET_FILE:example_c>
DESTINATION lib/memgraph/query_modules
RENAME example.so)
RENAME example_c.so)
# Also install the source of the example, so the user can read it.
install(FILES example.c DESTINATION lib/memgraph/query_modules/src)
# Install the Python example
install(FILES example.py DESTINATION lib/memgraph/query_modules RENAME py_example.py)
add_library(example_cpp SHARED example.cpp)
target_include_directories(example_cpp PRIVATE ${CMAKE_SOURCE_DIR}/include)
target_compile_options(example_cpp PRIVATE -Wall)
target_link_libraries(example_cpp PRIVATE -static-libgcc -static-libstdc++)
# Strip C++ example in release build.
if (lower_build_type STREQUAL "release")
add_custom_command(TARGET example_cpp POST_BUILD
COMMAND strip -s $<TARGET_FILE:example_cpp>
COMMENT "Stripping symbols and sections from the C++ example module")
endif()
install(PROGRAMS $<TARGET_FILE:example_cpp>
DESTINATION lib/memgraph/query_modules
RENAME example_cpp.so)
# Also install the source of the example, so users can read it.
install(FILES example.cpp DESTINATION lib/memgraph/query_modules/src)
# Install the Python modules
add_library(schema SHARED schema.cpp)
target_include_directories(schema PRIVATE ${CMAKE_SOURCE_DIR}/include)
target_compile_options(schema PRIVATE -Wall)
target_link_libraries(schema PRIVATE -static-libgcc -static-libstdc++)
# Strip the schema module in release build.
if (lower_build_type STREQUAL "release")
add_custom_command(TARGET schema POST_BUILD
COMMAND strip -s $<TARGET_FILE:schema>
COMMENT "Stripping symbols and sections from the C++ schema module")
endif()
install(PROGRAMS $<TARGET_FILE:schema>
DESTINATION lib/memgraph/query_modules
RENAME schema.so)
# Also install the source of the schema module, so users can read it.
install(FILES schema.cpp DESTINATION lib/memgraph/query_modules/src)
add_library(text SHARED text_search_module.cpp)
target_include_directories(text PRIVATE ${CMAKE_SOURCE_DIR}/include)
target_compile_options(text PRIVATE -Wall)
target_link_libraries(text PRIVATE -static-libgcc -static-libstdc++ fmt::fmt)
# Strip the text_search module in release build.
if (lower_build_type STREQUAL "release")
add_custom_command(TARGET text POST_BUILD
COMMAND strip -s $<TARGET_FILE:text>
COMMENT "Stripping symbols and sections from the C++ text_search module")
endif()
install(PROGRAMS $<TARGET_FILE:text>
DESTINATION lib/memgraph/query_modules
RENAME text.so)
# Also install the source of the text_search module, so users can read it.
install(FILES text_search_module.cpp DESTINATION lib/memgraph/query_modules/src)
# Install the Python example and modules
install(FILES example.py DESTINATION lib/memgraph/query_modules RENAME py_example.py)
install(FILES graph_analyzer.py DESTINATION lib/memgraph/query_modules)
install(FILES mgp_networkx.py DESTINATION lib/memgraph/query_modules)
install(FILES nxalg.py DESTINATION lib/memgraph/query_modules)
install(FILES wcc.py DESTINATION lib/memgraph/query_modules)
install(FILES mgps.py DESTINATION lib/memgraph/query_modules)
install(FILES convert.py DESTINATION lib/memgraph/query_modules)
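Note: once installed under lib/memgraph/query_modules, each .so and .py file above is loaded by Memgraph at startup, and its procedures are namespaced by file name (example_c, example_cpp, schema, text, plus the Python modules). One way to verify registration, sketched here with the neo4j Python driver; the local Bolt address and empty credentials are assumptions:

from neo4j import GraphDatabase

# List all registered procedures via Memgraph's built-in mg.procedures().
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("", ""))
with driver.session() as session:
    for record in session.run("CALL mg.procedures() YIELD name RETURN name"):
        print(record["name"])  # e.g. example_cpp.return_true
driver.close()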

10
query_modules/convert.py Normal file

@@ -0,0 +1,10 @@
from json import loads

import mgp


@mgp.function
def str2object(string: str) -> mgp.Any:
    if string:
        return loads(string)
    return None
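Note: once loaded, the function is callable from openCypher, e.g. RETURN convert.str2object('{"a": 1}'). A complementary serializer (hypothetical, not part of this diff) would go through the same mgp.function API:

from json import dumps

import mgp


@mgp.function
def object2str(value: mgp.Any) -> str:
    # Hypothetical inverse of str2object; works only for values that are
    # JSON-serializable (maps, lists, strings, numbers, booleans, null).
    return dumps(value)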

127
query_modules/example.cpp Normal file

@@ -0,0 +1,127 @@
// Copyright 2023 Memgraph Ltd.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt; by using this file, you agree to be bound by the terms of the Business Source
// License, and you may not use this file except in compliance with the Business Source License.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
#include <mgp.hpp>
void ProcImpl(std::vector<mgp::Value> arguments, mgp::Graph graph, mgp::RecordFactory record_factory) {
  auto record = record_factory.NewRecord();
  record.Insert("out", true);
}

void SampleReadProc(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory) {
  try {
    // Assigning the memory pointer directly (mgp::memory = memory;) still
    // works, but it is deprecated because of concurrency issues. Please use
    // the guard instead.
    mgp::MemoryDispatcherGuard guard(memory);

    std::vector<mgp::Value> arguments;
    for (size_t i = 0; i < mgp::list_size(args); i++) {
      auto arg = mgp::Value(mgp::list_at(args, i));
      arguments.push_back(arg);
    }

    ProcImpl(arguments, mgp::Graph(memgraph_graph), mgp::RecordFactory(result));
  } catch (const std::exception &e) {
    mgp::result_set_error_msg(result, e.what());
    return;
  }
}

void AddXNodes(mgp_list *args, mgp_graph *memgraph_graph, mgp_result *result, mgp_memory *memory) {
  // Use the guard instead of assigning mgp::memory directly (deprecated).
  mgp::MemoryDispatcherGuard guard(memory);

  auto graph = mgp::Graph(memgraph_graph);
  std::vector<mgp::Value> arguments;
  for (size_t i = 0; i < mgp::list_size(args); i++) {
    auto arg = mgp::Value(mgp::list_at(args, i));
    arguments.push_back(arg);
  }

  // The first argument is the number of nodes to create.
  for (int64_t i = 0; i < arguments[0].ValueInt(); i++) {
    graph.CreateNode();
  }
}

void Multiply(mgp_list *args, mgp_func_context *ctx, mgp_func_result *res, mgp_memory *memory) {
  // Use the guard instead of assigning mgp::memory directly (deprecated).
  mgp::MemoryDispatcherGuard guard(memory);

  std::vector<mgp::Value> arguments;
  for (size_t i = 0; i < mgp::list_size(args); i++) {
    auto arg = mgp::Value(mgp::list_at(args, i));
    arguments.push_back(arg);
  }

  auto result = mgp::Result(res);
  auto first = arguments[0].ValueInt();
  auto second = arguments[1].ValueInt();
  result.SetValue(first * second);
}

extern "C" int mgp_init_module(struct mgp_module *module, struct mgp_memory *memory) {
  try {
    mgp::MemoryDispatcherGuard guard(memory);
    mgp::AddProcedure(SampleReadProc, "return_true", mgp::ProcedureType::Read,
                      {mgp::Parameter("param_1", mgp::Type::Int), mgp::Parameter("param_2", mgp::Type::Double, 2.3)},
                      {mgp::Return("out", mgp::Type::Bool)}, module, memory);
  } catch (const std::exception &e) {
    return 1;
  }

  try {
    mgp::MemoryDispatcherGuard guard(memory);
    mgp::AddProcedure(AddXNodes, "add_x_nodes", mgp::ProcedureType::Write,
                      {mgp::Parameter("param_1", mgp::Type::Int)}, {}, module, memory);
  } catch (const std::exception &e) {
    return 1;
  }

  try {
    mgp::MemoryDispatcherGuard guard(memory);
    mgp::AddFunction(Multiply, "multiply",
                     {mgp::Parameter("int", mgp::Type::Int), mgp::Parameter("int", mgp::Type::Int, (int64_t)3)},
                     module, memory);
  } catch (const std::exception &e) {
    return 1;
  }

  return 0;
}

extern "C" int mgp_shutdown_module() { return 0; }

Some files were not shown because too many files have changed in this diff.