mirror of https://github.com/Vonng/ddia.git
synced 2025-03-06 15:40:11 +08:00

update zh-tw content

This commit is contained in:
parent 84fcfa7158
commit fd9bd1fbc5

ch1.md: 2 changes
@@ -449,7 +449,7 @@ As always with abstractions in computing, there is no one right answer to what y
为解决这个问题,云原生服务通常避免使用虚拟磁盘,而是建立在专门为特定工作负载优化的专用存储服务之上。如 S3 等对象存储服务旨在长期存储相对较大的文件,大小从数百千字节到几个千兆字节不等。存储在数据库中的单独行或值通常比这小得多;因此云数据库通常在单独的服务中管理更小的值,并在对象存储中存储更大的数据块(包含许多单独的值) [[24](ch01.html#Antonopoulos2019_ch1)]。
在传统的系统架构中,同一台计算机负责存储(磁盘)和计算(CPU 和 RAM),但在云原生系统中,这两种责任已经有所分离或*解耦* [[8](ch01.html#Prout2022), [25](ch01.html#Vuppalapati2020), [26](https://learning.oreilly.com/library/view/designing-data-intensive-applications/)] 例如,S3仅存储文件,如果你想分析那些数据,你将不得不在 S3 外部的某处运行分析代码。这意味着需要通过网络传输数据,我们将在[“分布式与单节点系统”](ch01.html#sec_introduction_distributed)中进一步讨论这一点。
在传统的系统架构中,同一台计算机负责存储(磁盘)和计算(CPU 和 RAM),但在云原生系统中,这两种责任已经有所分离或*解耦* [[8](ch01.html#Prout2022), [25](ch01.html#Vuppalapati2020), [26](https://learning.oreilly.com/library/view/designing-data-intensive-applications/)]。例如,S3仅存储文件,如果你想分析那些数据,你将不得不在 S3 外部的某处运行分析代码。这意味着需要通过网络传输数据,我们将在[“分布式与单节点系统”](ch01.html#sec_introduction_distributed)中进一步讨论这一点。
此外,云原生系统通常是*多租户*的,这意味着它们不是为每个客户配置单独的机器,而是在同一共享硬件上由同一服务处理来自几个不同客户的数据和计算 [[28](ch01.html#Vanlightly2023)]。多租户可以实现更好的硬件利用率、更容易的可扩展性和云提供商更容易的管理,但它也需要精心的工程设计,以确保一个客户的活动不影响系统对其他客户的性能或安全性 [[29](ch01.html#Jonas2019)]。
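To make the storage/compute split described above concrete, here is a minimal sketch, assuming the `boto3` client library, AWS credentials already configured, and an invented bucket and key: the analysis code runs outside S3 and must first pull the object over the network.

```python
# Sketch: storage/compute separation, assuming boto3 and a hypothetical
# bucket/key. S3 only stores bytes; any analysis must run elsewhere and
# fetch the data over the network first.
import boto3

s3 = boto3.client("s3")

# Pull the object out of the storage service (network transfer).
response = s3.get_object(Bucket="example-logs", Key="2025/03/06/events.json")
data = response["Body"].read()

# The "compute" half: the analysis happens here, outside S3.
line_count = data.count(b"\n")
print(f"fetched {len(data)} bytes, {line_count} events")
```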
@@ -14,6 +14,7 @@
**通知**:DDIA [**第二版**](https://github.com/Vonng/ddia/tree/v2) 正在翻譯中(當前 [`v2`](https://github.com/Vonng/ddia/tree/v2) 分支),歡迎加入並提出您的寶貴意見!
> **Contrib 指南**:目前放出的三章中,如果您對任意一個 Section (以 `###` 三級標題)的翻譯感興趣,建立一個 Issue 宣告鎖定,然後直接 PR 即可。
--------
zh-tw/ch1.md: 353 changes
@@ -10,7 +10,7 @@
小量資料,可在單一機器上儲存和處理,通常相對容易處理。然而,隨著資料量或查詢率的增加,需要將資料分佈到多臺機器上,這引入了許多挑戰。隨著應用程式需求的複雜化,僅在一個系統中儲存所有資料已不再足夠,可能需要結合多個提供不同功能的儲存或處理系統。
如果資料管理是開發應用程式的主要挑戰之一,我們稱這類應用為*資料密集型* [[1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Kouzes2009)]。而在*計算密集型*系統中,挑戰在於並行處理一些非常大的計算,在資料密集型應用中,我們通常更關心的是如何儲存和處理大資料量、管理資料變化、在出現故障和併發時確保一致性以及確保服務的高可用性。
如果資料管理是開發應用程式的主要挑戰之一,我們稱這類應用為*資料密集型* [[1](ch01.html#Kouzes2009)]。而在*計算密集型*系統中,挑戰在於並行處理一些非常大的計算,在資料密集型應用中,我們通常更關心的是如何儲存和處理大資料量、管理資料變化、在出現故障和併發時確保一致性以及確保服務的高可用性。
這類應用通常由提供常用功能的標準構建塊構成。例如,許多應用需要:
@@ -22,11 +22,11 @@
在構建應用程式時,我們通常會採用幾個軟體系統或服務,如資料庫或 API,並用一些應用程式碼將它們粘合在一起。如果你完全按照資料系統的設計目的去做,那麼這個過程可能會非常容易。
然而,隨著你的應用變得更加雄心勃勃,挑戰也隨之而來。有許多不同特性的資料庫系統,適用於不同的目的——你該如何選擇使用哪一個?有各種各樣的快取方法,幾種構建搜尋索引的方式等等——你該如何權衡它們的利弊?你需要弄清楚哪些工具和哪些方法最適合手頭的任務,而且將工具組合起來時可能會很難做到一款工具單獨無法完成的事情。
然而,隨著你的應用變得更加雄心勃勃,挑戰也隨之而來。有許多不同特性的資料庫系統,適用於不同的目的——你該如何選擇使用哪一個?有各種各樣的快取方法,幾種構建搜尋索引的方式等等——你該如何權衡它們的利弊?你需要弄清楚哪些工具和哪些方法最適合手頭的任務,而且將多款工具組合起來以完成單獨一款工具無法完成的事情也可能是很困難的。
本書是一本指南,旨在幫助你做出關於使用哪些技術以及如何組合它們的決策。正如你將看到的,沒有一種方法基本上比其他方法更好;每種方法都有其優缺點。透過這本書,你將學會提出正確的問題,評估和比較資料系統,從而找出最適合你的特定應用需求的方法。
本書是一本指南,旨在幫助你做出關於使用哪些技術以及如何組合它們的決策。正如你將看到的,沒有任何一種方法從根本上比其他所有方法都好;每種方法都有其優缺點。透過這本書,你將學會提出正確的問題,以評估和比較資料系統,從而找出最適合你的特定應用需求的方法。
我們將從探索資料在當今組織中的典型使用方式開始我們的旅程。這裡的許多想法起源於*企業軟體*(即大型組織,如大公司和政府的軟體需求和工程實踐),因為歷史上只有大型組織擁有需要複雜技術解決方案的大資料量。如果你的資料量足夠小,你甚至可以簡單地將其儲存在電子表格中!然而,最近,較小的公司和初創企業管理大資料量並構建資料密集型系統也變得普遍。
我們將從探索資料在當今組織中的典型使用方式開始我們的旅程。這裡的許多想法起源於*企業軟體*(即大型組織如大公司和政府的軟體需求和工程實踐),因為歷史上只有大型組織擁有需要複雜技術解決方案的大資料量。如果你的資料量足夠小,你甚至可以簡單地將其儲存在電子表格中!然而,最近,較小的公司和初創企業管理大資料量並構建資料密集型系統也變得普遍。
關於資料系統的一個關鍵挑戰是,不同的人需要用資料做非常不同的事情。如果你在一家公司工作,你和你的團隊會有一套優先事項,而另一個團隊可能完全有不同的目標,儘管你們可能都在處理同一資料集!此外,這些目標可能不會明確表達,這可能會導致誤解和對正確方法的爭議。
@@ -37,13 +37,13 @@
- 何時從單節點系統遷移到分散式系統([“分散式與單節點系統”](#分散式與單節點系統))
- 平衡業務需求與使用者權利([“資料系統、法律與社會”](#資料系統法律與社會))
此外,本章將為我們接下來的書中提供必需的術語。
此外,本章將為我們接下來的書中的內容提供必需的術語。
Data is central to much application development today. With web and mobile apps, software as a service (SaaS), and cloud services, it has become normal to store data from many different users in a shared server-based data infrastructure. Data from user activity, business transactions, devices and sensors needs to be stored and made available for analysis. As users interact with an application, they both read the data that is stored, and also generate more data.
Small amounts of data, which can be stored and processed on a single machine, are often fairly easy to deal with. However, as the data volume or the rate of queries grows, it needs to be distributed across multiple machines, which introduces many challenges. As the needs of the application become more complex, it is no longer sufficient to store everything in one system, but it might be necessary to combine multiple storage or processing systems that provide different capabilities.
We call an application *data-intensive* if data management is one of the primary challenges in developing the application [[1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Kouzes2009)]. While in *compute-intensive* systems the challenge is parallelizing some very large computation, in data-intensive applications we usually worry more about things like storing and processing large data volumes, managing changes to data, ensuring consistency in the face of failures and concurrency, and making sure services are highly available.
We call an application *data-intensive* if data management is one of the primary challenges in developing the application [[1](ch01.html#Kouzes2009)]. While in *compute-intensive* systems the challenge is parallelizing some very large computation, in data-intensive applications we usually worry more about things like storing and processing large data volumes, managing changes to data, ensuring consistency in the face of failures and concurrency, and making sure services are highly available.
Such applications are typically built from standard building blocks that provide commonly needed functionality. For example, many applications need to:
@@ -65,10 +65,10 @@ One of the key challenges with data systems is that different people need to do
To help you understand what choices you can make, this chapter compares several contrasting concepts, and explores their trade-offs:
- the difference between transaction processing and analytics ([“Transaction Processing versus Analytics”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_analytics));
- pros and cons of cloud services and self-hosted systems ([“Cloud versus Self-Hosting”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_cloud));
- when to move from single-node systems to distributed systems ([“Distributed versus Single-Node Systems”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_distributed)); and
- balancing the needs of the business and the rights of the user ([“Data Systems, Law, and Society”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_compliance)).
- the difference between transaction processing and analytics ([“Transaction Processing versus Analytics”](ch01.html#sec_introduction_analytics));
- pros and cons of cloud services and self-hosted systems ([“Cloud versus Self-Hosting”](ch01.html#sec_introduction_cloud));
- when to move from single-node systems to distributed systems ([“Distributed versus Single-Node Systems”](ch01.html#sec_introduction_distributed)); and
- balancing the needs of the business and the rights of the user ([“Data Systems, Law, and Society”](ch01.html#sec_introduction_compliance)).
Moreover, this chapter will provide you with terminology that we will need for the rest of the book.
@@ -76,11 +76,11 @@ Moreover, this chapter will provide you with terminology that we will need for t
### 術語:前端與後端
我們在本書中將討論的許多內容涉及*後端開發*。解釋該術語:對於網路應用程式,客戶端程式碼(在網頁瀏覽器中執行)被稱為*前端*,處理使用者請求的伺服器端程式碼被稱為*後端*。移動應用與前端類似,它們提供使用者介面,通常透過網際網路與伺服器端後端通訊。前端有時會在使用者裝置上本地管理資料[[2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Kleppmann2019)],但最大的資料基礎設施挑戰通常存在於後端:前端只需要處理一個使用者的資料,而後端則代表*所有*使用者管理資料。
我們在本書中將討論的許多內容涉及*後端開發*。解釋該術語:對於網路應用程式,客戶端程式碼(在網頁瀏覽器中執行)被稱為*前端*,處理使用者請求的伺服器端程式碼被稱為*後端*。移動應用與前端類似,它們提供使用者介面,通常透過網際網路與伺服器端後端通訊。前端有時會在使用者裝置上本地管理資料[[2](ch01.html#Kleppmann2019)],但最大的資料基礎設施挑戰通常存在於後端:前端只需要處理一個使用者的資料,而後端則代表*所有*使用者管理資料。
後端服務通常可以透過 HTTP 訪問;它通常包含一些應用程式程式碼,這些程式碼在一個或多個數據庫中讀寫資料,有時還會與額外的資料系統(如快取或訊息佇列)交互(我們可能統稱為*資料基礎設施*)。應用程式程式碼通常是*無狀態的*(即,當它完成處理一個 HTTP 請求後,它會忘記該請求的所有資訊),並且任何需要從一個請求傳遞到另一個請求的資訊都需要儲存在客戶端或伺服器端的資料基礎設施中。
後端服務通常可以透過 HTTP 訪問;它通常包含一些應用程式程式碼,這些程式碼在一個或多個數據庫中讀寫資料,有時還會與額外的資料系統(如快取或訊息佇列)互動(我們可能統稱為*資料基礎設施*)。應用程式程式碼通常是*無狀態的*(即,當它完成處理一個 HTTP 請求後,它會忘記該請求的所有資訊),並且任何需要從一個請求傳遞到另一個請求的資訊都需要儲存在客戶端或伺服器端的資料基礎設施中。
Much of what we will discuss in this book relates to *backend development*. To explain that term: for web applications, the client-side code (which runs in a web browser) is called the *frontend*, and the server-side code that handles user requests is known as the *backend*. Mobile apps are similar to frontends in that they provide user interfaces, which often communicate over the Internet with a server-side backend. Frontends sometimes manage data locally on the user’s device [[2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Kleppmann2019)], but the greatest data infrastructure challenges often lie in the backend: a frontend only needs to handle one user’s data, whereas the backend manages data on behalf of *all* of the users.
Much of what we will discuss in this book relates to *backend development*. To explain that term: for web applications, the client-side code (which runs in a web browser) is called the *frontend*, and the server-side code that handles user requests is known as the *backend*. Mobile apps are similar to frontends in that they provide user interfaces, which often communicate over the Internet with a server-side backend. Frontends sometimes manage data locally on the user’s device [[2](ch01.html#Kleppmann2019)], but the greatest data infrastructure challenges often lie in the backend: a frontend only needs to handle one user’s data, whereas the backend manages data on behalf of *all* of the users.
A backend service is often reachable via HTTP; it usually consists of some application code that reads and writes data in one or more databases, and sometimes interfaces with additional data systems such as caches or message queues (which we might collectively call *data infrastructure*). The application code is often *stateless* (i.e., when it finishes handling one HTTP request, it forgets everything about that request), and any information that needs to persist from one request to another needs to be stored either on the client, or in the server-side data infrastructure.
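As a minimal sketch of such a stateless backend, using only the Python standard library (the route, database file, and schema are invented for illustration; SQLite stands in for the server-side data infrastructure):

```python
# Sketch of a stateless backend: the handler keeps no per-request state in
# memory; everything that must survive between requests lives in the
# database (SQLite stands in for the data infrastructure).
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB = "app.db"  # hypothetical database file

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Open a fresh connection per request: nothing is carried over.
        with sqlite3.connect(DB) as conn:
            conn.execute("CREATE TABLE IF NOT EXISTS hits (path TEXT)")
            conn.execute("INSERT INTO hits VALUES (?)", (self.path,))
            (count,) = conn.execute(
                "SELECT COUNT(*) FROM hits WHERE path = ?", (self.path,)
            ).fetchone()
        body = json.dumps({"path": self.path, "hits": count}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```

Because the handler itself remembers nothing between requests, any instance of this service could handle the next request, which is what makes stateless application code straightforward to replicate and scale.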
@@ -89,7 +89,7 @@ A backend service is often reachable via HTTP; it usually consists of some appli
## 事務處理與分析
如果你在企業中從事資料系統工作,你可能會遇到幾種不同型別的處理資料的人。第一種是*後端工程師*,他們構建處理讀取和更新資料請求的服務;這些服務通常直接或間接透過其他服務為外部使用者提供服務(見[“微服務和無伺服器”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_microservices))。有時服務是供組織內部其他部分使用的。
如果你在企業中從事資料系統工作,你可能會遇到幾種不同型別的處理資料的人。第一種是*後端工程師*,他們構建處理讀取和更新資料請求的服務;這些服務通常直接或間接透過其他服務為外部使用者提供服務(見[“微服務和無伺服器”](ch01.html#sec_introduction_microservices))。有時服務是供組織內部其他部分使用的。
除了管理後端服務的團隊外,還有兩個群體通常需要訪問組織的資料:*商業分析師*,他們生成有關組織活動的報告以幫助管理層做出更好的決策(*商業智慧*或*BI*),以及*資料科學家*,他們在資料中尋找新的見解或建立由資料分析和機器學習/AI支援的面向使用者的產品功能(例如,電子商務網站上的“購買 X 的人也購買了 Y”推薦、風險評分或垃圾郵件過濾等預測分析,以及搜尋結果的排名)。
@@ -98,11 +98,11 @@ A backend service is often reachable via HTTP; it usually consists of some appli
- *業務系統*包括後端服務和資料基礎設施,資料是在那裡建立的,例如透過服務外部使用者。在這裡,應用程式程式碼根據使用者的操作讀取並修改其資料庫中的資料。
- *分析系統*滿足商業分析師和資料科學家的需求。它們包含來自業務系統的資料的只讀副本,並針對分析所需的資料處理型別進行了最佳化。
正如我們將在下一節中看到的,出於充分的理由,操作和分析系統通常保持獨立。隨著這些系統的成熟,出現了兩個新的專業角色:*資料工程師*和*分析工程師*。資料工程師是瞭解如何整合操作和分析系統的人,他們負責組織的資料基礎設施的更廣泛管理[[3](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Reis2022)]。分析工程師建模和轉換資料,使其對查詢組織中資料的終端使用者更有用[[4](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Machado2023)]。
正如我們將在下一節中看到的,出於充分的理由,業務和分析系統通常保持獨立。隨著這些系統的成熟,出現了兩個新的專業角色:*資料工程師*和*分析工程師*。資料工程師是瞭解如何整合業務和分析系統的人,他們負責組織的資料基礎設施的更廣泛管理[[3](ch01.html#Reis2022)]。分析工程師建模和轉換資料,使其對查詢組織中的資料的終端使用者更有用[[4](ch01.html#Machado2023)]。
許多工程師專注於操作或分析的一側。然而,這本書涵蓋了操作和分析資料系統,因為兩者在組織內的資料生命週期中都扮演著重要的角色。我們將深入探討用於向內部和外部使用者提供服務的資料基礎設施,以便你能更好地與這一界限另一側的同事合作。
許多工程師專注於業務或分析的一側。然而,這本書涵蓋了業務和分析資料系統,因為兩者在組織內的資料生命週期中都扮演著重要的角色。我們將深入探討用於向內部和外部使用者提供服務的資料基礎設施,以便你能更好地與這一界限另一側的同事合作。
If you are working on data systems in an enterprise, you are likely to encounter several different types of people who work with data. The first type are *backend engineers* who build services that handle requests for reading and updating data; these services often serve external users, either directly or indirectly via other services (see [“Microservices and Serverless”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_microservices)). Sometimes services are for internal use by other parts of the organization.
If you are working on data systems in an enterprise, you are likely to encounter several different types of people who work with data. The first type are *backend engineers* who build services that handle requests for reading and updating data; these services often serve external users, either directly or indirectly via other services (see [“Microservices and Serverless”](ch01.html#sec_introduction_microservices)). Sometimes services are for internal use by other parts of the organization.
In addition to the teams managing backend services, two other groups of people typically require access to an organization’s data: *business analysts*, who generate reports about the activities of the organization in order to help the management make better decisions (*business intelligence* or *BI*), and *data scientists*, who look for novel insights in data or who create user-facing product features that are enabled by data analysis and machine learning/AI (for example, “people who bought X also bought Y” recommendations on an e-commerce website, predictive analytics such as risk scoring or spam filtering, and ranking of search results).
@@ -111,7 +111,7 @@ Although business analysts and data scientists tend to use different tools and o
- *Operational systems* consist of the backend services and data infrastructure where data is created, for example by serving external users. Here, the application code both reads and modifies the data in its databases, based on the actions performed by the users.
- *Analytical systems* serve the needs of business analysts and data scientists. They contain a read-only copy of the data from the operational systems, and they are optimized for the types of data processing that are needed for analytics.
As we shall see in the next section, operational and analytical systems are often kept separate, for good reasons. As these systems have matured, two new specialized roles have emerged: *data engineers* and *analytics engineers*. Data engineers are the people who know how to integrate the operational and the analytical systems, and who take responsibility for the organization’s data infrastructure more widely [[3](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Reis2022)]. Analytics engineers model and transform data to make it more useful for end users querying data in an organization [[4](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Machado2023)].
As we shall see in the next section, operational and analytical systems are often kept separate, for good reasons. As these systems have matured, two new specialized roles have emerged: *data engineers* and *analytics engineers*. Data engineers are the people who know how to integrate the operational and the analytical systems, and who take responsibility for the organization’s data infrastructure more widely [[3](ch01.html#Reis2022)]. Analytics engineers model and transform data to make it more useful for end users querying data in an organization [[4](ch01.html#Machado2023)].
Many engineers specialize on either the operational or the analytical side. However, this book covers both operational and analytical data systems, since both play an important role in the lifecycle of data within an organization. We will explore in-depth the data infrastructure that is used to deliver services both to internal and external users, so that you can work better with your colleagues on the other side of this divide.
@@ -128,13 +128,13 @@ In the early days of business data processing, a write to the database typically
儘管資料庫開始被用於許多不同型別的資料——社交媒體上的帖子、遊戲中的移動、地址簿中的聯絡人等——基本的訪問模式仍與處理商業交易類似。業務系統通常透過某個鍵查詢少量記錄(這稱為*點查詢*)。根據使用者的輸入,記錄被插入、更新或刪除。因為這些應用是互動式的,這種訪問模式被稱為*線上事務處理*(OLTP)。
然而,資料庫也越來越多地被用於分析,其訪問模式與OLTP有很大不同。通常,分析查詢會掃描大量記錄,並計算聚合統計資料(如計數、求和或平均值),而不是將個別記錄返回給使用者。例如,超市連鎖的商業分析師可能希望回答諸如:
然而,資料庫也越來越多地被用於分析,其訪問模式與 OLTP 有很大不同。通常,分析查詢會掃描大量記錄,並計算聚合統計資料(如計數、求和或平均值),而不是將個別記錄返回給使用者。例如,連鎖超市的商業分析師可能希望回答諸如此類的問題:
- 我們的每家店在一月份的總收入是多少?
- 我們在最近的促銷活動中賣出的香蕉比平時多多少?
- 哪種品牌的嬰兒食品最常與X品牌的尿布一起購買?
- 哪種品牌的嬰兒食品最常與某品牌的尿布一起購買?
這些型別的查詢所產生的報告對於商業智慧至關重要,幫助管理層決定下一步做什麼。為了區分使用資料庫的這種模式與事務處理的不同,它被稱為*線上分析處理*(OLAP)[[5](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Codd1993)]。OLTP和分析之間的區別並不總是明確的,但[表1-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#tab_oltp_vs_olap)列出了一些典型的特徵。
這些型別的查詢所產生的報告對於商業智慧至關重要,幫助管理層決定下一步做什麼。為了區分使用資料庫的這種模式與事務處理的不同,它被稱為*線上分析處理*(OLAP)[[5](ch01.html#Codd1993)]。OLTP 和分析之間的區別並不總是明確的,但[表1-1](ch01.html#tab_oltp_vs_olap)列出了一些典型的特徵。
| 屬性 | 業務系統 (OLTP) | 分析系統 (OLAP) |
|--------|----------------|---------------|
@@ -156,9 +156,9 @@ However, databases also started being increasingly used for analytics, which has
- How many more bananas than usual did we sell during our latest promotion?
- Which brand of baby food is most often purchased together with brand X diapers?
The reports that result from these types of queries are important for business intelligence, helping the management decide what to do next. In order to differentiate this pattern of using databases from transaction processing, it has been called *online analytic processing* (OLAP) [[5](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Codd1993)]. The difference between OLTP and analytics is not always clear-cut, but some typical characteristics are listed in [Table 1-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#tab_oltp_vs_olap).
The reports that result from these types of queries are important for business intelligence, helping the management decide what to do next. In order to differentiate this pattern of using databases from transaction processing, it has been called *online analytic processing* (OLAP) [[5](ch01.html#Codd1993)]. The difference between OLTP and analytics is not always clear-cut, but some typical characteristics are listed in [Table 1-1](ch01.html#tab_oltp_vs_olap).
| Property | 運營系統 (OLTP) | 分析系統 (OLAP) |
| Property | Operational System (OLTP) | Analytical System (OLAP) |
|:--------------------|:------------------------------------------------|:------------------------------------------|
| Main read pattern | Point queries (fetch individual records by key) | Aggregate over large number of records |
| Main write pattern | Create, update, and delete individual records | Bulk import (ETL) or event stream |
@@ -170,9 +170,9 @@ The reports that result from these types of queries are important for business i
> 注意
>
> *線上*在*OLAP*中的含義並不清晰;它可能指的是分析師不僅僅用於預定義報告的查詢,而且還可以互動式地用於探索性查詢的事實。
> *線上分析處理*中的*線上*一詞的含義並不清晰;它可能指的是分析師不僅僅查詢預定義的報告,而且還可以互動式地進行探索性的查詢。
在業務系統中,使用者通常不允許構建自定義SQL查詢並在資料庫上執行,因為這可能允許他們讀取或修改他們無權訪問的資料。此外,他們可能編寫執行成本高昂的查詢,從而影響其他使用者的資料庫效能。因此,OLTP系統大多執行固定的查詢集,這些查詢嵌入在應用程式程式碼中,僅偶爾使用一次性自定義查詢進行維護或故障排除。另一方面,分析資料庫通常允許使用者手動編寫任意SQL查詢,或使用資料視覺化或儀表板工具(如Tableau、Looker或Microsoft Power BI)自動生成查詢。
在業務系統中,使用者通常不被允許構建自定義 SQL 查詢並在資料庫上執行,因為這可能允許他們讀取或修改他們無權訪問的資料。此外,他們可能編寫執行成本高昂的查詢,從而影響其他使用者的資料庫效能。因此,OLTP 系統大多執行固定的查詢集,這些查詢嵌入在應用程式程式碼中,僅偶爾使用一次性自定義查詢進行維護或故障排除。另一方面,分析資料庫通常允許使用者手動編寫任意 SQL 查詢,或使用資料視覺化或儀表板工具(如 Tableau、Looker 或 Microsoft Power BI)自動生成查詢。
The meaning of *online* in *OLAP* is unclear; it probably refers to the fact that queries are not just for predefined reports, but that analysts use the OLAP system interactively for explorative queries.
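To illustrate the contrast in access patterns from Table 1-1, a small sketch with an invented SQLite table: an OLTP-style point query that fetches one record by key, followed by an OLAP-style aggregation that scans many records and returns summary statistics.

```python
# Sketch: the two access patterns from Table 1-1 against one toy table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, store TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(1, "berlin", 9.50), (2, "berlin", 12.00), (3, "munich", 7.25)],
)

# OLTP-style point query: fetch one individual record by key.
row = conn.execute("SELECT * FROM sales WHERE id = ?", (2,)).fetchone()
print("point query:", row)

# OLAP-style aggregation: scan many records, return a summary per group.
for store, total in conn.execute(
    "SELECT store, SUM(amount) FROM sales GROUP BY store"
):
    print(f"total revenue for {store}: {total}")
```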
@@ -188,13 +188,13 @@ With operational systems, users are generally not allowed to construct custom SQ
通常不希望商業分析師和資料科學家直接查詢這些OLTP系統,原因有幾個:
- 感興趣的資料可能分佈在多個業務系統中,將這些資料集合併到單一查詢中很困難(一個稱為*資料孤島*的問題);
- 適合OLTP的模式和資料佈局不太適合分析(見[“星型和雪花型:分析的模式”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_analytics));
- 適合OLTP的模式和資料佈局不太適合分析(見[“星型和雪花型:分析的模式”](ch03.html#sec_datamodels_analytics));
- 分析查詢可能相當昂貴,如果在OLTP資料庫上執行,將影響其他使用者的效能;以及
- OLTP系統可能位於一個不允許使用者直接訪問的單獨網路中,出於安全或合規原因。
與此相反,*資料倉庫*是一個單獨的資料庫,分析師可以盡情查詢,而不影響OLTP操作[[6](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Chaudhuri1997)]。正如我們將在[即將提供連結]中看到的,資料倉庫通常以與OLTP資料庫非常不同的方式儲存資料,以最佳化常見於分析的查詢型別。
與此相反,*資料倉庫*是一個單獨的資料庫,分析師可以盡情查詢,而不影響OLTP操作[[6](ch01.html#Chaudhuri1997)]。正如我們將在[即將提供連結]中看到的,資料倉庫通常以與OLTP資料庫非常不同的方式儲存資料,以最佳化常見於分析的查詢型別。
資料倉庫包含公司所有各種OLTP系統中的資料的只讀副本。資料從OLTP資料庫中提取(使用定期資料轉儲或持續更新流),轉換成便於分析的模式,清理後,然後載入到資料倉庫中。將資料獲取到資料倉庫的過程稱為*提取-轉換-載入*(ETL),並在[圖1-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#fig_dwh_etl)中進行了說明。有時*轉換*和*載入*的順序被交換(即在資料倉庫中載入後進行轉換),這就變成了*ELT*。
資料倉庫包含公司所有各種OLTP系統中的資料的只讀副本。資料從OLTP資料庫中提取(使用定期資料轉儲或持續更新流),轉換成便於分析的模式,清理後,然後載入到資料倉庫中。將資料獲取到資料倉庫的過程稱為*提取-轉換-載入*(ETL),並在[圖1-1](ch01.html#fig_dwh_etl)中進行了說明。有時*轉換*和*載入*的順序被交換(即在資料倉庫中載入後進行轉換),這就變成了*ELT*。
At first, the same databases were used for both transaction processing and analytic queries. SQL turned out to be quite flexible in this regard: it works well for both types of queries. Nevertheless, in the late 1980s and early 1990s, there was a trend for companies to stop using their OLTP systems for analytics purposes, and to run the analytics on a separate database system instead. This separate database was called a *data warehouse*.
@@ -203,13 +203,13 @@ A large enterprise may have dozens, even hundreds, of operational transaction pr
It is usually undesirable for business analysts and data scientists to directly query these OLTP systems, for several reasons:
- the data of interest may be spread across multiple operational systems, making it difficult to combine those datasets in a single query (a problem known as *data silos*);
- the kinds of schemas and data layouts that are good for OLTP are less well suited for analytics (see [“Stars and Snowflakes: Schemas for Analytics”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_analytics));
- the kinds of schemas and data layouts that are good for OLTP are less well suited for analytics (see [“Stars and Snowflakes: Schemas for Analytics”](ch03.html#sec_datamodels_analytics));
- analytic queries can be quite expensive, and running them on an OLTP database would impact the performance for other users; and
- the OLTP systems might reside in a separate network that users are not allowed direct access to for security or compliance reasons.
A *data warehouse*, by contrast, is a separate database that analysts can query to their hearts’ content, without affecting OLTP operations [[6](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Chaudhuri1997)]. As we shall see in [Link to Come], data warehouses often store data in a way that is very different from OLTP databases, in order to optimize for the types of queries that are common in analytics.
A *data warehouse*, by contrast, is a separate database that analysts can query to their hearts’ content, without affecting OLTP operations [[6](ch01.html#Chaudhuri1997)]. As we shall see in [Link to Come], data warehouses often store data in a way that is very different from OLTP databases, in order to optimize for the types of queries that are common in analytics.
The data warehouse contains a read-only copy of the data in all the various OLTP systems in the company. Data is extracted from OLTP databases (using either a periodic data dump or a continuous stream of updates), transformed into an analysis-friendly schema, cleaned up, and then loaded into the data warehouse. This process of getting data into the data warehouse is known as *Extract–Transform–Load* (ETL) and is illustrated in [Figure 1-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#fig_dwh_etl). Sometimes the order of the *transform* and *load* steps is swapped (i.e., the transformation is done in the data warehouse, after loading), resulting in *ELT*.
The data warehouse contains a read-only copy of the data in all the various OLTP systems in the company. Data is extracted from OLTP databases (using either a periodic data dump or a continuous stream of updates), transformed into an analysis-friendly schema, cleaned up, and then loaded into the data warehouse. This process of getting data into the data warehouse is known as *Extract–Transform–Load* (ETL) and is illustrated in [Figure 1-1](ch01.html#fig_dwh_etl). Sometimes the order of the *transform* and *load* steps is swapped (i.e., the transformation is done in the data warehouse, after loading), resulting in *ELT*.

@@ -219,62 +219,62 @@
在某些情況下,ETL過程的資料來源是外部的SaaS產品,如客戶關係管理(CRM)、電子郵件營銷或信用卡處理系統。在這些情況下,你無法直接訪問原始資料庫,因為它只能透過軟體供應商的API訪問。將這些外部系統的資料引入你自己的資料倉庫,可以啟用SaaS API無法實現的分析。對於SaaS API的ETL通常由專業的資料連線服務實現,如Fivetran、Singer或AirByte。
有些資料庫系統提供*混合事務/分析處理*(HTAP),旨在在單一系統中同時啟用OLTP和分析,無需從一個系統向另一個系統進行ETL [[7](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Ozcan2017),[8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Prout2022)]。然而,許多HTAP系統內部由一個OLTP系統與一個獨立的分析系統組成,這些系統透過一個公共介面隱藏——因此,理解這兩者之間的區別對於理解這些系統的工作方式非常重要。
有些資料庫系統提供*混合事務/分析處理*(HTAP),旨在在單一系統中同時啟用OLTP和分析,無需從一個系統向另一個系統進行ETL [[7](ch01.html#Ozcan2017),[8](ch01.html#Prout2022)]。然而,許多HTAP系統內部由一個OLTP系統與一個獨立的分析系統組成,這些系統透過一個公共介面隱藏——因此,理解這兩者之間的區別對於理解這些系統的工作方式非常重要。
此外,儘管存在HTAP,由於它們目標和要求的不同,事務性和分析性系統之間的分離仍然很常見。特別是,每個業務系統擁有自己的資料庫被視為良好的實踐(見[“微服務與無伺服器”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_microservices)),導致有數百個獨立的操作資料庫;另一方面,一個企業通常只有一個數據倉庫,這樣業務分析師可以在單個查詢中合併來自幾個業務系統的資料。
此外,儘管存在HTAP,由於它們目標和要求的不同,事務性和分析性系統之間的分離仍然很常見。特別是,每個業務系統擁有自己的資料庫被視為良好的實踐(見[“微服務與無伺服器”](ch01.html#sec_introduction_microservices)),導致有數百個獨立的操作資料庫;另一方面,一個企業通常只有一個數據倉庫,這樣業務分析師可以在單個查詢中合併來自幾個業務系統的資料。
業務系統和分析系統之間的分離是一個更廣泛趨勢的一部分:隨著工作負載變得更加苛刻,系統變得更加專業化,併為特定工作負載最佳化。通用系統可以舒適地處理小資料量,但規模越大,系統趨向於變得更加專業化 [[9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Stonebraker2005fitsall)]。
業務系統和分析系統之間的分離是一個更廣泛趨勢的一部分:隨著工作負載變得更加苛刻,系統變得更加專業化,併為特定工作負載最佳化。通用系統可以舒適地處理小資料量,但規模越大,系統趨向於變得更加專業化 [[9](ch01.html#Stonebraker2005fitsall)]。
In some cases the data sources of the ETL processes are external SaaS products such as customer relationship management (CRM), email marketing, or credit card processing systems. In those cases, you do not have direct access to the original database, since it is accessible only via the software vendor’s API. Bringing the data from these external systems into your own data warehouse can enable analyses that are not possible via the SaaS API. ETL for SaaS APIs is often implemented by specialist data connector services such as Fivetran, Singer, or AirByte.
Some database systems offer *hybrid transactional/analytic processing* (HTAP), which aims to enable OLTP and analytics in a single system without requiring ETL from one system into another [[7](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Ozcan2017), [8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Prout2022)]. However, many HTAP systems internally consist of an OLTP system coupled with a separate analytical system, hidden behind a common interface—so the distinction between the two remains important for understanding how these systems work.
Some database systems offer *hybrid transactional/analytic processing* (HTAP), which aims to enable OLTP and analytics in a single system without requiring ETL from one system into another [[7](ch01.html#Ozcan2017), [8](ch01.html#Prout2022)]. However, many HTAP systems internally consist of an OLTP system coupled with a separate analytical system, hidden behind a common interface—so the distinction between the two remains important for understanding how these systems work.
Moreover, even though HTAP exists, it is common to have a separation between transactional and analytic systems due to their different goals and requirements. In particular, it is considered good practice for each operational system to have its own database (see [“Microservices and Serverless”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_microservices)), leading to hundreds of separate operational databases; on the other hand, an enterprise usually has a single data warehouse, so that business analysts can combine data from several operational systems in a single query.
Moreover, even though HTAP exists, it is common to have a separation between transactional and analytic systems due to their different goals and requirements. In particular, it is considered good practice for each operational system to have its own database (see [“Microservices and Serverless”](ch01.html#sec_introduction_microservices)), leading to hundreds of separate operational databases; on the other hand, an enterprise usually has a single data warehouse, so that business analysts can combine data from several operational systems in a single query.
The separation between operational and analytical systems is part of a wider trend: as workloads have become more demanding, systems have become more specialized and optimized for particular workloads. General-purpose systems can handle small data volumes comfortably, but the greater the scale, the more specialized systems tend to become [[9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Stonebraker2005fitsall)].
The separation between operational and analytical systems is part of a wider trend: as workloads have become more demanding, systems have become more specialized and optimized for particular workloads. General-purpose systems can handle small data volumes comfortably, but the greater the scale, the more specialized systems tend to become [[9](ch01.html#Stonebraker2005fitsall)].
#### 從資料倉庫到資料湖
資料倉庫通常使用*關係*資料模型,透過SQL查詢(見[第3章](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#ch_datamodels)),可能使用專業的商業智慧軟體。這種模型很適合業務分析師需要進行的型別的查詢,但它不太適合資料科學家的需求,他們可能需要執行的任務如下:
資料倉庫通常使用*關係*資料模型,透過SQL查詢(見[第3章](ch03.html#ch_datamodels)),可能使用專業的商業智慧軟體。這種模型很適合業務分析師需要進行的型別的查詢,但它不太適合資料科學家的需求,他們可能需要執行的任務如下:
- 將資料轉換成適合訓練機器學習模型的形式;這通常需要將資料庫表的行和列轉換為稱為*特徵*的數字值向量或矩陣。以一種最大化訓練模型效能的方式執行這種轉換的過程稱為*特徵工程*,它通常需要使用SQL難以表達的自定義程式碼。
- 獲取文字資料(例如,產品評論)並使用自然語言處理技術嘗試從中提取結構化資訊(例如,作者的情感或他們提到的主題)。類似地,他們可能需要使用計算機視覺技術從照片中提取結構化資訊。
儘管已經努力在SQL資料模型中新增機器學習運算子 [[10](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Cohen2009)] 並在關係基礎上構建高效的機器學習系統 [[11](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Olteanu2020)],許多資料科學家更喜歡不在資料倉庫這類關係資料庫中工作。相反,許多人更喜歡使用如pandas和scikit-learn這樣的Python資料分析庫,統計分析語言如R,以及分散式分析框架如Spark [[12](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Bornstein2020)]。我們在[“資料框架、矩陣和陣列”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_dataframes)中進一步討論這些內容。
儘管已經努力在SQL資料模型中新增機器學習運算子 [[10](ch01.html#Cohen2009)] 並在關係基礎上構建高效的機器學習系統 [[11](ch01.html#Olteanu2020)],許多資料科學家更喜歡不在資料倉庫這類關係資料庫中工作。相反,許多人更喜歡使用如pandas和scikit-learn這樣的Python資料分析庫,統計分析語言如R,以及分散式分析框架如Spark [[12](ch01.html#Bornstein2020)]。我們在[“資料框架、矩陣和陣列”](ch03.html#sec_datamodels_dataframes)中進一步討論這些內容。
因此,組織面臨著使資料以適合資料科學家使用的形式可用的需求。答案是*資料湖*:一個集中的資料儲存庫,存放可能對分析有用的任何資料,透過ETL過程從業務系統獲取。與資料倉庫的不同之處在於,資料湖只包含檔案,不強加任何特定的檔案格式或資料模型。資料湖中的檔案可能是使用如Avro或Parquet等檔案格式編碼的資料庫記錄集合(見[連結即將到來]),但它們同樣可能包含文字、影像、影片、感測器讀數、稀疏矩陣、特徵向量、基因序列或任何其他型別的資料 [[13](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Fowler2015)]。
因此,組織面臨著使資料以適合資料科學家使用的形式可用的需求。答案是*資料湖*:一個集中的資料儲存庫,存放可能對分析有用的任何資料,透過ETL過程從業務系統獲取。與資料倉庫的不同之處在於,資料湖只包含檔案,不強加任何特定的檔案格式或資料模型。資料湖中的檔案可能是使用如Avro或Parquet等檔案格式編碼的資料庫記錄集合(見[連結即將到來]),但它們同樣可能包含文字、影像、影片、感測器讀數、稀疏矩陣、特徵向量、基因序列或任何其他型別的資料 [[13](ch01.html#Fowler2015)]。
ETL過程已經概括為*資料管道*,在某些情況下,資料湖已成為從業務系統到資料倉庫的中間停靠點。資料湖包含由業務系統產生的“原始”形式的資料,而不是轉換成關係資料倉庫架構的資料。這種方法的優點是,每個資料的消費者都可以將原始資料轉換成最適合其需要的形式。這被稱為*壽司原則*:“原始資料更好” [[14](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Johnson2015)]。
ETL過程已經概括為*資料管道*,在某些情況下,資料湖已成為從業務系統到資料倉庫的中間停靠點。資料湖包含由業務系統產生的“原始”形式的資料,而不是轉換成關係資料倉庫架構的資料。這種方法的優點是,每個資料的消費者都可以將原始資料轉換成最適合其需要的形式。這被稱為*壽司原則*:“原始資料更好” [[14](ch01.html#Johnson2015)]。
除了從資料湖載入資料到單獨的資料倉庫外,還可以直接在資料湖中的檔案上執行典型的資料倉庫工作負載(SQL查詢和商業分析),以及資料科學/機器學習工作負載。這種架構被稱為*資料湖倉*,它需要一個查詢執行引擎和一個元資料(例如,模式管理)層來擴充套件資料湖的檔案儲存 [[15](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Armbrust2021)]。Apache Hive、Spark SQL、Presto和Trino是這種方法的例子。
除了從資料湖載入資料到單獨的資料倉庫外,還可以直接在資料湖中的檔案上執行典型的資料倉庫工作負載(SQL查詢和商業分析),以及資料科學/機器學習工作負載。這種架構被稱為*資料湖倉*,它需要一個查詢執行引擎和一個元資料(例如,模式管理)層來擴充套件資料湖的檔案儲存 [[15](ch01.html#Armbrust2021)]。Apache Hive、Spark SQL、Presto和Trino是這種方法的例子。
A data warehouse often uses a *relational* data model that is queried through SQL (see [Chapter 3](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#ch_datamodels)), perhaps using specialized business intelligence software. This model works well for the types of queries that business analysts need to make, but it is less well suited to the needs of data scientists, who might need to perform tasks such as:
A data warehouse often uses a *relational* data model that is queried through SQL (see [Chapter 3](ch03.html#ch_datamodels)), perhaps using specialized business intelligence software. This model works well for the types of queries that business analysts need to make, but it is less well suited to the needs of data scientists, who might need to perform tasks such as:
- Transform data into a form that is suitable for training a machine learning model; often this requires turning the rows and columns of a database table into a vector or matrix of numerical values called *features*. The process of performing this transformation in a way that maximizes the performance of the trained model is called *feature engineering*, and it often requires custom code that is difficult to express using SQL.
- Take textual data (e.g., reviews of a product) and use natural language processing techniques to try to extract structured information from it (e.g., the sentiment of the author, or which topics they mention). Similarly, they might need to extract structured information from photos using computer vision techniques.
Although there have been efforts to add machine learning operators to a SQL data model [[10](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Cohen2009)] and to build efficient machine learning systems on top of a relational foundation [[11](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Olteanu2020)], many data scientists prefer not to work in a relational database such as a data warehouse. Instead, many prefer to use Python data analysis libraries such as pandas and scikit-learn, statistical analysis languages such as R, and distributed analytics frameworks such as Spark [[12](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Bornstein2020)]. We discuss these further in [“Dataframes, Matrices, and Arrays”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_dataframes).
Although there have been efforts to add machine learning operators to a SQL data model [[10](ch01.html#Cohen2009)] and to build efficient machine learning systems on top of a relational foundation [[11](ch01.html#Olteanu2020)], many data scientists prefer not to work in a relational database such as a data warehouse. Instead, many prefer to use Python data analysis libraries such as pandas and scikit-learn, statistical analysis languages such as R, and distributed analytics frameworks such as Spark [[12](ch01.html#Bornstein2020)]. We discuss these further in [“Dataframes, Matrices, and Arrays”](ch03.html#sec_datamodels_dataframes).
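As a small example of why dataframe libraries are often preferred for this kind of work, a sketch of feature engineering in pandas on an invented table of product reviews, producing the numeric feature vectors mentioned above:

```python
# Sketch: turning rows/columns into numeric feature vectors with pandas;
# the dataframe contents are invented for illustration.
import pandas as pd

reviews = pd.DataFrame({
    "product": ["diapers", "bananas", "diapers"],
    "rating": [5, 3, 1],
    "text": ["great fit", "too ripe", "leaked badly :("],
})

features = pd.DataFrame({
    # Numeric features derived from the raw columns.
    "rating": reviews["rating"],
    "text_length": reviews["text"].str.len(),
    "has_negative_smiley": reviews["text"].str.contains(":(", regex=False).astype(int),
})
# One-hot encode the categorical column into 0/1 feature columns.
features = features.join(pd.get_dummies(reviews["product"], prefix="product"))
print(features)
```

A feature matrix like this can then be handed to scikit-learn or a similar library for model training.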
Consequently, organizations face a need to make data available in a form that is suitable for use by data scientists. The answer is a *data lake*: a centralized data repository that holds a copy of any data that might be useful for analysis, obtained from operational systems via ETL processes. The difference from a data warehouse is that a data lake simply contains files, without imposing any particular file format or data model. Files in a data lake might be collections of database records, encoded using a file format such as Avro or Parquet (see [Link to Come]), but they can equally well contain text, images, videos, sensor readings, sparse matrices, feature vectors, genome sequences, or any other kind of data [[13](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Fowler2015)].
Consequently, organizations face a need to make data available in a form that is suitable for use by data scientists. The answer is a *data lake*: a centralized data repository that holds a copy of any data that might be useful for analysis, obtained from operational systems via ETL processes. The difference from a data warehouse is that a data lake simply contains files, without imposing any particular file format or data model. Files in a data lake might be collections of database records, encoded using a file format such as Avro or Parquet (see [Link to Come]), but they can equally well contain text, images, videos, sensor readings, sparse matrices, feature vectors, genome sequences, or any other kind of data [[13](ch01.html#Fowler2015)].
ETL processes have been generalized to *data pipelines*, and in some cases the data lake has become an intermediate stop on the path from the operational systems to the data warehouse. The data lake contains data in a “raw” form produced by the operational systems, without the transformation into a relational data warehouse schema. This approach has the advantage that each consumer of the data can transform the raw data into a form that best suits their needs. It has been dubbed the *sushi principle*: “raw data is better” [[14](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Johnson2015)].
ETL processes have been generalized to *data pipelines*, and in some cases the data lake has become an intermediate stop on the path from the operational systems to the data warehouse. The data lake contains data in a “raw” form produced by the operational systems, without the transformation into a relational data warehouse schema. This approach has the advantage that each consumer of the data can transform the raw data into a form that best suits their needs. It has been dubbed the *sushi principle*: “raw data is better” [[14](ch01.html#Johnson2015)].
Besides loading data from a data lake into a separate data warehouse, it is also possible to run typical data warehousing workloads (SQL queries and business analytics) directly on the files in the data lake, alongside data science/machine learning workloads. This architecture is known as a *data lakehouse*, and it requires a query execution engine and a metadata (e.g., schema management) layer that extend the data lake’s file storage [[15](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Armbrust2021)]. Apache Hive, Spark SQL, Presto, and Trino are examples of this approach.
Besides loading data from a data lake into a separate data warehouse, it is also possible to run typical data warehousing workloads (SQL queries and business analytics) directly on the files in the data lake, alongside data science/machine learning workloads. This architecture is known as a *data lakehouse*, and it requires a query execution engine and a metadata (e.g., schema management) layer that extend the data lake’s file storage [[15](ch01.html#Armbrust2021)]. Apache Hive, Spark SQL, Presto, and Trino are examples of this approach.
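For instance, with Spark SQL (one of the engines just named) a Parquet directory in a data lake can be registered as a view and queried directly, without first loading it into a warehouse; the path and column names below are invented:

```python
# Sketch: running a SQL query directly over Parquet files in a data lake
# with Spark SQL; the path and column names are invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

# Expose raw lake files as a queryable view (no load into a warehouse).
spark.read.parquet("s3a://example-lake/sales/").createOrReplaceTempView("sales")

top_stores = spark.sql("""
    SELECT store, SUM(amount) AS revenue
    FROM sales
    GROUP BY store
    ORDER BY revenue DESC
    LIMIT 10
""")
top_stores.show()
```

In a lakehouse setup, the table definition and schema would typically live in the metadata layer mentioned above rather than in an ad-hoc temporary view.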
#### 資料湖之外
隨著分析實踐的成熟,組織越來越關注分析系統和資料管道的管理和運營,例如在DataOps宣言中捕捉到的內容 [[16](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#DataOps)]。其中包括治理、隱私和遵守像GDPR和CCPA這樣的法規問題,我們將在[“資料系統、法律與社會”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_compliance)和[即將到來的連結]中討論。
隨著分析實踐的成熟,組織越來越關注分析系統和資料管道的管理和運營,例如在DataOps宣言中捕捉到的內容 [[16](ch01.html#DataOps)]。其中包括治理、隱私和遵守像GDPR和CCPA這樣的法規問題,我們將在[“資料系統、法律與社會”](ch01.html#sec_introduction_compliance)和[即將到來的連結]中討論。
此外,分析資料越來越多地不僅以檔案和關係表的形式提供,還以事件流的形式提供(見[即將到來的連結])。使用基於檔案的資料分析,你可以定期(例如,每天)重新執行分析,以響應資料的變化,但流處理允許分析系統更快地響應事件,大約在幾秒鐘的數量級。根據應用程式和時間敏感性,流處理方法可以很有價值,例如識別並阻止潛在的欺詐或濫用行為。
在某些情況下,分析系統的輸出會提供給業務系統(有時被稱為*反向ETL* [[17](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Manohar2021)])。例如,一個在分析系統中訓練的機器學習模型可能被部署到生產中,以便它可以為終端使用者生成推薦,如“購買X的人也買了Y”。這些部署的分析系統輸出也被稱為*資料產品* [[18](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#ORegan2018)]。機器學習模型可以使用TFX、Kubeflow或MLflow等專門工具部署到業務系統中。
在某些情況下,分析系統的輸出會提供給業務系統(有時被稱為*反向ETL* [[17](ch01.html#Manohar2021)])。例如,一個在分析系統中訓練的機器學習模型可能被部署到生產中,以便它可以為終端使用者生成推薦,如“購買X的人也買了Y”。這些部署的分析系統輸出也被稱為*資料產品* [[18](ch01.html#ORegan2018)]。機器學習模型可以使用TFX、Kubeflow或MLflow等專門工具部署到業務系統中。
As analytics practices have matured, organizations have been increasingly paying attention to the management and operations of analytics systems and data pipelines, as captured for example in the DataOps manifesto [[16](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#DataOps)]. Part of this are issues of governance, privacy, and compliance with regulation such as GDPR and CCPA, which we discuss in [“Data Systems, Law, and Society”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_compliance) and [Link to Come].
As analytics practices have matured, organizations have been increasingly paying attention to the management and operations of analytics systems and data pipelines, as captured for example in the DataOps manifesto [[16](ch01.html#DataOps)]. Part of this are issues of governance, privacy, and compliance with regulation such as GDPR and CCPA, which we discuss in [“Data Systems, Law, and Society”](ch01.html#sec_introduction_compliance) and [Link to Come].
Moreover, analytical data is increasingly made available not only as files and relational tables, but also as streams of events (see [Link to Come]). With file-based data analysis you can re-run the analysis periodically (e.g., daily) in order to respond to changes in the data, but stream processing allows analytics systems to respond to events much faster, on the order of seconds. Depending on the application and how time-sensitive it is, a stream processing approach can be valuable, for example to identify and block potentially fraudulent or abusive activity.
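A toy sketch of the difference, in plain Python with an invented event source and fraud rule: a batch job would re-scan a file later, whereas a stream processor reacts to each event within moments of its arrival.

```python
# Toy sketch of stream processing: react to each event as it arrives,
# rather than re-running a batch analysis over a file later. The event
# stream and the fraud rule are invented for illustration.
import time
from typing import Iterator

def event_stream() -> Iterator[dict]:
    # Stand-in for a real stream (e.g., a message broker subscription).
    for amount in [12.0, 8.5, 9500.0, 20.0]:
        yield {"user": "u1", "amount": amount, "ts": time.time()}

def on_event(event: dict) -> None:
    # The decision is made within moments of the event occurring.
    if event["amount"] > 1000:
        print(f"blocking suspicious payment: {event}")
    else:
        print(f"ok: {event['amount']}")

for event in event_stream():
    on_event(event)
```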
In some cases the outputs of analytics systems are made available to operational systems (a process sometimes known as *reverse ETL* [[17](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Manohar2021)]). For example, a machine-learning model that was trained on data in an analytics system may be deployed to production, so that it can generate recommendations for end-users, such as “people who bought X also bought Y”. Such deployed outputs of analytics systems are also known as *data products* [[18](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#ORegan2018)]. Machine learning models can be deployed to operational systems using specialized tools such as TFX, Kubeflow, or MLflow.
In some cases the outputs of analytics systems are made available to operational systems (a process sometimes known as *reverse ETL* [[17](ch01.html#Manohar2021)]). For example, a machine-learning model that was trained on data in an analytics system may be deployed to production, so that it can generate recommendations for end-users, such as “people who bought X also bought Y”. Such deployed outputs of analytics systems are also known as *data products* [[18](ch01.html#ORegan2018)]. Machine learning models can be deployed to operational systems using specialized tools such as TFX, Kubeflow, or MLflow.
### 記錄系統與衍生資料系統
@@ -283,7 +283,7 @@
- 記錄系統
記錄系統,也稱為*真實來源*,持有某些資料的權威或*規範*版本。當新資料進入時,例如作為使用者輸入,首先在此處寫入。每個事實只表示一次(通常是*規範化*的;見[“規範化、反規範化和連線”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_normalization))。如果另一個系統與記錄系統之間存在任何差異,則記錄系統中的值(按定義)是正確的。
記錄系統,也稱為*真實來源*,持有某些資料的權威或*規範*版本。當新資料進入時,例如作為使用者輸入,首先在此處寫入。每個事實只表示一次(通常是*規範化*的;見[“規範化、反規範化和連線”](ch03.html#sec_datamodels_normalization))。如果另一個系統與記錄系統之間存在任何差異,則記錄系統中的值(按定義)是正確的。
- 衍生資料系統
@@ -303,7 +303,7 @@ Related to the distinction between operational and analytical systems, this book
- Systems of record
A system of record, also known as *source of truth*, holds the authoritative or *canonical* version of some data. When new data comes in, e.g., as user input, it is first written here. Each fact is represented exactly once (the representation is typically *normalized*; see [“Normalization, Denormalization, and Joins”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_normalization)). If there is any discrepancy between another system and the system of record, then the value in the system of record is (by definition) the correct one.
A system of record, also known as *source of truth*, holds the authoritative or *canonical* version of some data. When new data comes in, e.g., as user input, it is first written here. Each fact is represented exactly once (the representation is typically *normalized*; see [“Normalization, Denormalization, and Joins”](ch03.html#sec_datamodels_normalization)). If there is any discrepancy between another system and the system of record, then the value in the system of record is (by definition) the correct one.
- Derived data systems
@@ -328,15 +328,15 @@ That brings us to the end of our comparison of analytics and transaction process
對於組織需要執行的任何事務,首先要問的問題之一是:應該在內部完成還是外包?您應該自行構建還是購買?
這最終是一個關於業務優先順序的問題。管理學的普遍觀點是,作為組織的核心能力或競爭優勢的事物應該在內部完成,而非核心、常規或普通的事務則應交給供應商處理 [[19](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Fournier2021)]。舉一個極端的例子,大多數公司不會自己發電(除非它們是能源公司,且不考慮緊急備用電力),因為從電網購買電力更便宜。
這最終是一個關於業務優先順序的問題。管理學的普遍觀點是,作為組織的核心能力或競爭優勢的事物應該在內部完成,而非核心、常規或普通的事務則應交給供應商處理 [[19](ch01.html#Fournier2021)]。舉一個極端的例子,大多數公司不會自己發電(除非它們是能源公司,且不考慮緊急備用電力),因為從電網購買電力更便宜。
在軟體方面,需要做出的兩個重要決策是誰來構建軟體以及誰來部署它。有一個將每個決策外包出去的可能性的範圍,如[圖 1-2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#fig_cloud_spectrum)所示。一個極端是你編寫並在內部執行的定製軟體;另一個極端是廣泛使用的雲服務或軟體即服務(SaaS)產品,由外部供應商實施和操作,你只能透過Web介面或API訪問。
在軟體方面,需要做出的兩個重要決策是誰來構建軟體以及誰來部署它。有一個將每個決策外包出去的可能性的範圍,如[圖 1-2](ch01.html#fig_cloud_spectrum)所示。一個極端是你編寫並在內部執行的定製軟體;另一個極端是廣泛使用的雲服務或軟體即服務(SaaS)產品,由外部供應商實施和操作,你只能透過Web介面或API訪問。
With anything that an organization needs to do, one of the first questions is: should it be done in-house, or should it be outsourced? Should you build or should you buy?
Ultimately, this is a question about business priorities. The received management wisdom is that things that are a core competency or a competitive advantage of your organization should be done in-house, whereas things that are non-core, routine, or commonplace should be left to a vendor [[19](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Fournier2021)]. To give an extreme example, most companies do not generate their own electricity (unless they are an energy company, and leaving aside emergency backup power), since it is cheaper to buy electricity from the grid.
Ultimately, this is a question about business priorities. The received management wisdom is that things that are a core competency or a competitive advantage of your organization should be done in-house, whereas things that are non-core, routine, or commonplace should be left to a vendor [[19](ch01.html#Fournier2021)]. To give an extreme example, most companies do not generate their own electricity (unless they are an energy company, and leaving aside emergency backup power), since it is cheaper to buy electricity from the grid.
With software, two important decisions to be made are who builds the software and who deploys it. There is a spectrum of possibilities that outsource each decision to various degrees, as illustrated in [Figure 1-2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#fig_cloud_spectrum). At one extreme is bespoke software that you write and run in-house; at the other extreme are widely-used cloud services or Software as a Service (SaaS) products that are implemented and operated by an external vendor, and which you only access through a web interface or API.
With software, two important decisions to be made are who builds the software and who deploys it. There is a spectrum of possibilities that outsource each decision to various degrees, as illustrated in [Figure 1-2](ch01.html#fig_cloud_spectrum). At one extreme is bespoke software that you write and run in-house; at the other extreme are widely-used cloud services or Software as a Service (SaaS) products that are implemented and operated by an external vendor, and which you only access through a web interface or API.

@@ -355,9 +355,9 @@ Separately from this spectrum there is also the question of *how* you deploy ser
使用雲服務,而不是自己執行可比軟體,本質上是將該軟體的運營外包給雲提供商。支援和反對使用雲服務的理由都很充分。雲提供商聲稱使用他們的服務可以節省時間和金錢,並允許你比建立自己的基礎設施更快地行動。
雲服務是否實際上比自託管更便宜和更容易,很大程度上取決於你的技能和系統的工作負載。如果你已經有設定和操作所需系統的經驗,並且你的負載相當可預測(即,你需要的機器數量不會劇烈波動),那麼通常購買自己的機器並自己執行軟體會更便宜 [[20](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#HeinemeierHansson2022), [21](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Badizadegan2022)]。
雲服務是否實際上比自託管更便宜和更容易,很大程度上取決於你的技能和系統的工作負載。如果你已經有設定和操作所需系統的經驗,並且你的負載相當可預測(即,你需要的機器數量不會劇烈波動),那麼通常購買自己的機器並自己執行軟體會更便宜 [[20](ch01.html#HeinemeierHansson2022), [21](ch01.html#Badizadegan2022)]。
另一方面,如果你需要一個你不知道如何部署和操作的系統,那麼採用雲服務通常比自己學習管理系統更容易且更快。如果你必須僱傭並培訓專門的員工來維護和運營該系統,這可能非常昂貴。當你使用雲時,仍然需要一個運營團隊(見[“雲時代的運營”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_operations)),但將基本的系統管理外包可以釋放你的團隊,專注於更高層次的問題。
另一方面,如果你需要一個你不知道如何部署和操作的系統,那麼採用雲服務通常比自己學習管理系統更容易且更快。如果你必須僱傭並培訓專門的員工來維護和運營該系統,這可能非常昂貴。當你使用雲時,仍然需要一個運營團隊(見[“雲時代的運營”](ch01.html#sec_introduction_operations)),但將基本的系統管理外包可以釋放你的團隊,專注於更高層次的問題。
當你將系統的運營外包給專門運營該服務的公司時,這可能會帶來更好的服務,因為提供商從為許多客戶提供服務中獲得運營專長。另一方面,如果你自己執行服務,你可以配置並調整它以在你特定的工作負載上表現良好;雲服務不太可能願意代表你進行此類定製。
@@ -370,15 +370,15 @@
- 如果它缺少你需要的功能,你唯一能做的就是禮貌地詢問供應商是否會新增它;你通常無法自己實現它。
- 如果服務出現故障,你只能等待它恢復。
- 如果你以某種方式使用服務,觸發了一個錯誤或導致效能問題,你很難診斷問題。對於你自己執行的軟體,你可以從作業系統獲取效能指標和除錯資訊來幫助你瞭解其行為,你可以檢視伺服器日誌,但使用供應商託管的服務時,你通常無法訪問這些內部資訊。
- 此外,如果服務關閉或變得無法接受地昂貴,或者如果供應商決定以你不喜歡的方式更改其產品,你將受制於他們——繼續執行軟體的舊版本通常不是一個選項,因此你將被迫遷移到另一個服務 [[22](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Yegge2020)]。如果有提供相容API的替代服務,這種風險可以緩解,但對於許多雲服務,沒有標準的API,這增加了切換的成本,使供應商鎖定成為一個問題。
- 此外,如果服務關閉或變得無法接受地昂貴,或者如果供應商決定以你不喜歡的方式更改其產品,你將受制於他們——繼續執行軟體的舊版本通常不是一個選項,因此你將被迫遷移到另一個服務 [[22](ch01.html#Yegge2020)]。如果有提供相容API的替代服務,這種風險可以緩解,但對於許多雲服務,沒有標準的API,這增加了切換的成本,使供應商鎖定成為一個問題。
儘管存在這些風險,組織構建基於雲服務的新應用變得越來越流行。然而,雲服務並不能取代所有的內部資料系統:許多舊系統早於雲技術,且對於那些現有云服務無法滿足的特殊需求,內部系統仍然是必需的。例如,像高頻交易這樣對延遲極其敏感的應用需要完全控制硬體。
Using a cloud service, rather than running comparable software yourself, essentially outsources the operation of that software to the cloud provider. There are good arguments for and against cloud services. Cloud providers claim that using their services saves you time and money, and allows you to move faster compared to setting up your own infrastructure.
Whether a cloud service is actually cheaper and easier than self-hosting depends very much on your skills and the workload on your systems. If you already have experience setting up and operating the systems you need, and if your load is quite predictable (i.e., the number of machines you need does not fluctuate wildly), then it’s often cheaper to buy your own machines and run the software on them yourself [[20](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#HeinemeierHansson2022), [21](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Badizadegan2022)].
Whether a cloud service is actually cheaper and easier than self-hosting depends very much on your skills and the workload on your systems. If you already have experience setting up and operating the systems you need, and if your load is quite predictable (i.e., the number of machines you need does not fluctuate wildly), then it’s often cheaper to buy your own machines and run the software on them yourself [[20](ch01.html#HeinemeierHansson2022), [21](ch01.html#Badizadegan2022)].
On the other hand, if you need a system that you don’t already know how to deploy and operate, then adopting a cloud service is often easier and quicker than learning to manage the system yourself. If you have to hire and train staff specifically to maintain and operate the system, that can get very expensive. You still need an operations team when you’re using the cloud (see [“Operations in the Cloud Era”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_operations)), but outsourcing the basic system administration can free up your team to focus on higher-level concerns.
On the other hand, if you need a system that you don’t already know how to deploy and operate, then adopting a cloud service is often easier and quicker than learning to manage the system yourself. If you have to hire and train staff specifically to maintain and operate the system, that can get very expensive. You still need an operations team when you’re using the cloud (see [“Operations in the Cloud Era”](ch01.html#sec_introduction_operations)), but outsourcing the basic system administration can free up your team to focus on higher-level concerns.
When you outsource the operation of a system to a company that specializes in running that service, that can potentially result in a better service, since the provider gains operational expertise from providing the service to many customers. On the other hand, if you run the service yourself, you can configure and tune it to perform well on your particular workload; it is unlikely that a cloud service would be willing to make such customizations on your behalf.
@ -391,7 +391,7 @@ The biggest downside of a cloud service is that you have no control over it:
- If it is lacking a feature you need, all you can do is to politely ask the vendor whether they will add it; you generally cannot implement it yourself.
- If the service goes down, all you can do is to wait for it to recover.
- If you are using the service in a way that triggers a bug or causes performance problems, it will be difficult for you to diagnose the issue. With software that you run yourself, you can get performance metrics and debugging information from the operating system to help you understand its behavior, and you can look at the server logs, but with a service hosted by a vendor you usually do not have access to these internals.
- Moreover, if the service shuts down or becomes unacceptably expensive, or if the vendor decides to change their product in a way you don’t like, you are at their mercy—continuing to run an old version of the software is usually not an option, so you will be forced to migrate to an alternative service [[22](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Yegge2020)]. This risk is mitigated if there are alternative services that expose a compatible API, but for many cloud services there are no standard APIs, which raises the cost of switching, making vendor lock-in a problem.
- Moreover, if the service shuts down or becomes unacceptably expensive, or if the vendor decides to change their product in a way you don’t like, you are at their mercy—continuing to run an old version of the software is usually not an option, so you will be forced to migrate to an alternative service [[22](ch01.html#Yegge2020)]. This risk is mitigated if there are alternative services that expose a compatible API, but for many cloud services there are no standard APIs, which raises the cost of switching, making vendor lock-in a problem.
Despite all these risks, it has become more and more popular for organizations to build new applications on top of cloud services. However, cloud services will not subsume all in-house data systems: many older systems predate the cloud, and for any services that have specialist requirements that existing cloud services cannot meet, in-house systems remain necessary. For example, very latency-sensitive applications such as high-frequency trading require full control of the hardware.
@ -402,21 +402,16 @@ Despite all these risks, it has become more and more popular for organizations t
除了經濟模式的不同(訂閱服務而非購買硬體並在其上執行許可軟體),雲計算的興起還在技術層面深刻影響了資料系統的實施方式。*雲原生* 一詞用來描述一種旨在利用雲服務優勢的架構。
原則上,幾乎任何你可以自行託管的軟體也可以作為雲服務提供,實際上,許多流行的資料系統現在已經有了這樣的託管服務。然而,從底層設計為雲原生的系統顯示出多項優勢:在相同硬體上有更好的效能,從失敗中更快恢復,能迅速擴充套件計算資源以匹配負載,並支援更大的資料集[[23](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Verbitski2017), [24](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Antonopoulos2019_ch1), [25](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Vuppalapati2020)]。[表 1-2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#tab_cloud_native_dbs)列出了這兩類系統的一些例子。
原則上,幾乎任何你可以自行託管的軟體也可以作為雲服務提供,實際上,許多流行的資料系統現在已經有了這樣的託管服務。然而,從底層設計為雲原生的系統顯示出多項優勢:在相同硬體上有更好的效能,從失敗中更快恢復,能迅速擴充套件計算資源以匹配負載,並支援更大的資料集[[23](ch01.html#Verbitski2017), [24](ch01.html#Antonopoulos2019_ch1), [25](ch01.html#Vuppalapati2020)]。[表 1-2](ch01.html#tab_cloud_native_dbs)列出了這兩類系統的一些例子。
| 類別 | 自託管系統 | 雲原生系統 |
|----------|-----------------------------|---------------------------------------------------------------------|
| 操作型/OLTP | MySQL, PostgreSQL, MongoDB | AWS Aurora 【23】, Azure SQL DB Hyperscale 【24】, Google Cloud Spanner |
| 事務型/OLTP | MySQL, PostgreSQL, MongoDB | AWS Aurora 【23】, Azure SQL DB Hyperscale 【24】, Google Cloud Spanner |
| 分析型/OLAP | Teradata, ClickHouse, Spark | Snowflake 【25】, Google BigQuery, Azure Synapse Analytics |
Besides having a different economic model (subscribing to a service instead of buying hardware and licensing software to run on it), the rise of the cloud has also had a profound effect on how data systems are implemented on a technical level. The term *cloud-native* is used to describe an architecture that is designed to take advantage of cloud services.
In principle, almost any software that you can self-host could also be provided as a cloud service, and indeed such managed services are now available for many popular data systems. However, systems that have been designed from the ground up to be cloud-native have been shown to have several advantages: better performance on the same hardware, faster recovery from failures, being able to quickly scale computing resources to match the load, and supporting larger datasets [[23](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Verbitski2017), [24](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Antonopoulos2019_ch1), [25](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Vuppalapati2020)]. [Table 1-2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#tab_cloud_native_dbs) lists some examples of both types of systems.
| Category | Self-hosted systems | Cloud-native systems |
|:-----------------|:----------------------------|:--------------------------------------------------------------------|
| Operational/OLTP | MySQL, PostgreSQL, MongoDB | AWS Aurora 【23】, Azure SQL DB Hyperscale 【24】, Google Cloud Spanner |
| Analytical/OLAP | Teradata, ClickHouse, Spark | Snowflake 【25】, Google BigQuery, Azure Synapse Analytics |
In principle, almost any software that you can self-host could also be provided as a cloud service, and indeed such managed services are now available for many popular data systems. However, systems that have been designed from the ground up to be cloud-native have been shown to have several advantages: better performance on the same hardware, faster recovery from failures, being able to quickly scale computing resources to match the load, and supporting larger datasets [[23](ch01.html#Verbitski2017), [24](ch01.html#Antonopoulos2019_ch1), [25](ch01.html#Vuppalapati2020)]. [Table 1-2](ch01.html#tab_cloud_native_dbs) lists some examples of both types of systems.
@ -429,7 +424,7 @@ In principle, almost any software that you can self-host could also be provided
相比之下,雲原生服務的關鍵思想是不僅使用由作業系統管理的計算資源,還要構建在更低層級的雲服務之上,建立更高層級的服務。例如:
- *物件儲存*服務,如亞馬遜 S3、Azure Blob 儲存和 Cloudflare R2 儲存大檔案。它們提供的 API 比典型檔案系統的 API 更有限(基本的檔案讀寫),但它們的優勢在於隱藏了底層的物理機器:服務自動將資料分佈在許多機器上,因此你無需擔心任何一臺機器上的磁碟空間耗盡。即使某些機器或其磁碟完全失敗,也不會丟失資料。
- 許多其他服務又是建立在物件儲存和其他雲服務之上的:例如,Snowflake 是一種基於雲的分析資料庫(資料倉庫),依賴於 S3 進行資料儲存 [[25](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Vuppalapati2020)],還有一些服務又建立在 Snowflake 之上。
- 許多其他服務又是建立在物件儲存和其他雲服務之上的:例如,Snowflake 是一種基於雲的分析資料庫(資料倉庫),依賴於 S3 進行資料儲存 [[25](ch01.html#Vuppalapati2020)],還有一些服務又建立在 Snowflake 之上。
正如計算中的抽象總是一樣,關於你應該使用什麼,沒有一個正確的答案。一般規則是,更高層次的抽象往往更針對特定用例。如果你的需求與更高層系統設計的情況匹配,使用現有的更高層系統可能會比從更低層系統自行構建省去許多麻煩。另一方面,如果沒有高層系統滿足你的需求,那麼自己從更低層元件構建是唯一的選擇。
@ -440,7 +435,7 @@ In a cloud, this type of software can be run on an Infrastructure-as-a-Service e
In contrast, the key idea of cloud-native services is to use not only the computing resources managed by your operating system, but also to build upon lower-level cloud services to create higher-level services. For example:
- *Object storage* services such as Amazon S3, Azure Blob Storage, and Cloudflare R2 store large files. They provide more limited APIs than a typical filesystem (basic file reads and writes), but they have the advantage that they hide the underlying physical machines: the service automatically distributes the data across many machines, so that you don’t have to worry about running out of disk space on any one machine. Even if some machines or their disks fail entirely, no data is lost. (A minimal usage sketch follows after this list.)
- Many other services are in turn built upon object storage and other cloud services: for example, Snowflake is a cloud-based analytic database (data warehouse) that relies on S3 for data storage [[25](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Vuppalapati2020)], and some other services in turn build upon Snowflake.
- Many other services are in turn built upon object storage and other cloud services: for example, Snowflake is a cloud-based analytic database (data warehouse) that relies on S3 for data storage [[25](ch01.html#Vuppalapati2020)], and some other services in turn build upon Snowflake.
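
To make the contrast with a filesystem API concrete, here is a minimal sketch of writing and reading an object with boto3, the AWS SDK for Python. The bucket and key names are made up for illustration, and real code would also need credentials and error handling.

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Writing: unlike a filesystem, there is no seek/append/rename --
# you upload a whole object under a key within a bucket.
s3.put_object(
    Bucket="example-bucket",   # illustrative name
    Key="reports/2024.csv",    # illustrative key
    Body=b"id,amount\n1,42\n",
)

# Reading: you fetch the object (or a byte range of it) back.
response = s3.get_object(Bucket="example-bucket", Key="reports/2024.csv")
data = response["Body"].read()
```
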
As always with abstractions in computing, there is no one right answer to what you should use. As a general rule, higher-level abstractions tend to be more oriented towards particular use cases. If your needs match the situations for which a higher-level system is designed, using the existing higher-level system will probably provide what you need with much less hassle than building it yourself from lower-level systems. On the other hand, if there is no high-level system that meets your needs, then building it yourself from lower-level components is the only option.
@ -450,34 +445,34 @@ As always with abstractions in computing, there is no one right answer to what y
在傳統計算中,磁碟儲存被視為持久的(我們假設一旦某些內容被寫入磁碟,它就不會丟失);為了容忍單個硬碟的失敗,經常使用 RAID 來在幾個磁碟上維護資料的副本。在雲中,計算例項(虛擬機器)也可能有本地磁碟附加,但云原生系統通常將這些磁碟更像是臨時快取,而不是長期儲存。這是因為如果關聯例項失敗,或者為了適應負載變化而用更大或更小的例項替換例項(在不同的物理機上),本地磁碟將變得無法訪問。
作為本地磁碟的替代,雲服務還提供了可以從一個例項分離並連線到另一個例項的虛擬磁碟儲存(Amazon EBS、Azure 管理磁碟和 Google Cloud 中的持久磁碟)。這種虛擬磁碟實際上不是物理磁碟,而是由一組獨立機器提供的雲服務,模擬磁碟(塊裝置)的行為(每個塊通常為 4 KiB 大小)。這項技術使得在雲中執行傳統基於磁碟的軟體成為可能,但它通常表現出較差的效能和可擴充套件性 [[23](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Verbitski2017)]。
作為本地磁碟的替代,雲服務還提供了可以從一個例項分離並連線到另一個例項的虛擬磁碟儲存(Amazon EBS、Azure 管理磁碟和 Google Cloud 中的持久磁碟)。這種虛擬磁碟實際上不是物理磁碟,而是由一組獨立機器提供的雲服務,模擬磁碟(塊裝置)的行為(每個塊通常為 4 KiB 大小)。這項技術使得在雲中執行傳統基於磁碟的軟體成為可能,但它通常表現出較差的效能和可擴充套件性 [[23](ch01.html#Verbitski2017)]。
為解決這個問題,雲原生服務通常避免使用虛擬磁碟,而是建立在專門為特定工作負載最佳化的專用儲存服務之上。如 S3 等物件儲存服務旨在長期儲存相對較大的檔案,大小從數百千位元組到幾個千兆位元組不等。儲存在資料庫中的單獨行或值通常比這小得多;因此雲資料庫通常在單獨的服務中管理更小的值,並在物件儲存中儲存更大的資料塊(包含許多單獨的值) [[24](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Antonopoulos2019_ch1)]。
為解決這個問題,雲原生服務通常避免使用虛擬磁碟,而是建立在專門為特定工作負載最佳化的專用儲存服務之上。如 S3 等物件儲存服務旨在長期儲存相對較大的檔案,大小從數百千位元組到幾個千兆位元組不等。儲存在資料庫中的單獨行或值通常比這小得多;因此雲資料庫通常在單獨的服務中管理更小的值,並在物件儲存中儲存更大的資料塊(包含許多單獨的值) [[24](ch01.html#Antonopoulos2019_ch1)]。
在傳統的系統架構中,同一臺計算機負責儲存(磁碟)和計算(CPU 和 RAM),但在雲原生系統中,這兩種責任已經有所分離或*解耦* [[8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Prout2022), [25](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Vuppalapati2020), [26](https://learning.oreilly.com/library/view/designing-data-intensive-applications/例如,S3僅儲存檔案,如果你想分析那些資料,你將不得不在 S3 外部的某處執行分析程式碼。這意味著需要透過網路傳輸資料,我們將在[“分散式與單節點系統”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_distributed)中進一步討論這一點。
在傳統的系統架構中,同一臺計算機負責儲存(磁碟)和計算(CPU 和 RAM),但在雲原生系統中,這兩種責任已經有所分離或*解耦* [[8](ch01.html#Prout2022), [25](ch01.html#Vuppalapati2020), [26](ch01.html#Shapira2023), [27](ch01.html#Murthy2022)]。例如,S3僅儲存檔案,如果你想分析那些資料,你將不得不在 S3 外部的某處執行分析程式碼。這意味著需要透過網路傳輸資料,我們將在[“分散式與單節點系統”](ch01.html#sec_introduction_distributed)中進一步討論這一點。
此外,雲原生系統通常是*多租戶*的,這意味著它們不是為每個客戶配置單獨的機器,而是在同一共享硬體上由同一服務處理來自幾個不同客戶的資料和計算 [[28](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Vanlightly2023)]。多租戶可以實現更好的硬體利用率、更容易的可擴充套件性和雲提供商更容易的管理,但它也需要精心的工程設計,以確保一個客戶的活動不影響系統對其他客戶的效能或安全性 [[29](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Jonas2019)]。
此外,雲原生系統通常是*多租戶*的,這意味著它們不是為每個客戶配置單獨的機器,而是在同一共享硬體上由同一服務處理來自幾個不同客戶的資料和計算 [[28](ch01.html#Vanlightly2023)]。多租戶可以實現更好的硬體利用率、更容易的可擴充套件性和雲提供商更容易的管理,但它也需要精心的工程設計,以確保一個客戶的活動不影響系統對其他客戶的效能或安全性 [[29](ch01.html#Jonas2019)]。
In traditional computing, disk storage is regarded as durable (we assume that once something is written to disk, it will not be lost); to tolerate the failure of an individual hard disk, RAID is often used to maintain copies of the data on several disks. In the cloud, compute instances (virtual machines) may also have local disks attached, but cloud-native systems typically treat these disks more like an ephemeral cache, and less like long-term storage. This is because the local disk becomes inaccessible if the associated instance fails, or if the instance is replaced with a bigger or a smaller one (on a different physical machine) in order to adapt to changes in load.
As an alternative to local disks, cloud services also offer virtual disk storage that can be detached from one instance and attached to a different one (Amazon EBS, Azure managed disks, and persistent disks in Google Cloud). Such a virtual disk is not actually a physical disk, but rather a cloud service provided by a separate set of machines, which emulates the behavior of a disk (a *block device*, where each block is typically 4 KiB in size). This technology makes it possible to run traditional disk-based software in the cloud, but it often suffers from poor performance and poor scalability [[23](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Verbitski2017)].
As an alternative to local disks, cloud services also offer virtual disk storage that can be detached from one instance and attached to a different one (Amazon EBS, Azure managed disks, and persistent disks in Google Cloud). Such a virtual disk is not actually a physical disk, but rather a cloud service provided by a separate set of machines, which emulates the behavior of a disk (a *block device*, where each block is typically 4 KiB in size). This technology makes it possible to run traditional disk-based software in the cloud, but it often suffers from poor performance and poor scalability [[23](ch01.html#Verbitski2017)].
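
As an illustration of how low-level the emulated block-device interface is, the sketch below reads a single 4 KiB block from an attached virtual disk. The device path is an assumption (naming varies by cloud provider and instance type), and reading it would normally require root privileges; everything else, such as filesystems and databases, is layered on top of this interface.

```python
import os

BLOCK_SIZE = 4096     # 4 KiB, the typical block size
DEVICE = "/dev/xvdf"  # hypothetical path of an attached virtual disk

# The block-device interface is essentially "read/write block N":
fd = os.open(DEVICE, os.O_RDONLY)
try:
    os.lseek(fd, 7 * BLOCK_SIZE, os.SEEK_SET)  # position at block 7
    block = os.read(fd, BLOCK_SIZE)            # read one 4 KiB block
finally:
    os.close(fd)
```
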
To address this problem, cloud-native services generally avoid using virtual disks, and instead build on dedicated storage services that are optimized for particular workloads. Object storage services such as S3 are designed for long-term storage of fairly large files, ranging from hundreds of kilobytes to several gigabytes in size. The individual rows or values stored in a database are typically much smaller than this; cloud databases therefore typically manage smaller values in a separate service, and store larger data blocks (containing many individual values) in an object store [[24](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Antonopoulos2019_ch1)].
To address this problem, cloud-native services generally avoid using virtual disks, and instead build on dedicated storage services that are optimized for particular workloads. Object storage services such as S3 are designed for long-term storage of fairly large files, ranging from hundreds of kilobytes to several gigabytes in size. The individual rows or values stored in a database are typically much smaller than this; cloud databases therefore typically manage smaller values in a separate service, and store larger data blocks (containing many individual values) in an object store [[24](ch01.html#Antonopoulos2019_ch1)].
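
A minimal sketch of that split might look as follows. The 64 KiB threshold, the pointer encoding, and the dict-backed stand-in stores are illustrative assumptions, not the design of any particular cloud database.

```python
SMALL_VALUE_LIMIT = 64 * 1024  # assumed threshold: 64 KiB


class DictStore:
    """Stand-in for a real storage service, for illustration only."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]


class TieredStore:
    """Sketch: small values in a fast store, large blocks in an object store."""
    def __init__(self):
        self.kv = DictStore()       # stands in for a low-latency value store
        self.objects = DictStore()  # stands in for an S3-like object store

    def put(self, key: str, value: bytes) -> None:
        if len(value) <= SMALL_VALUE_LIMIT:
            self.kv.put(key, b"V" + value)         # store small value inline
        else:
            self.objects.put(key, value)           # large block to object store
            self.kv.put(key, b"P" + key.encode())  # keep only a small pointer

    def get(self, key: str) -> bytes:
        record = self.kv.get(key)
        if record[:1] == b"P":                     # follow the pointer
            return self.objects.get(record[1:].decode())
        return record[1:]
```
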
In a traditional systems architecture, the same computer is responsible for both storage (disk) and computation (CPU and RAM), but in cloud-native systems, these two responsibilities have become somewhat separated or *disaggregated* [[8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Prout2022), [25](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Vuppalapati2020), [26](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Shapira2023), [27](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Murthy2022)]: for example, S3 only stores files, and if you want to analyze that data, you will have to run the analysis code somewhere outside of S3. This implies transferring the data over the network, which we will discuss further in [“Distributed versus Single-Node Systems”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_distributed).
In a traditional systems architecture, the same computer is responsible for both storage (disk) and computation (CPU and RAM), but in cloud-native systems, these two responsibilities have become somewhat separated or *disaggregated* [[8](ch01.html#Prout2022), [25](ch01.html#Vuppalapati2020), [26](ch01.html#Shapira2023), [27](ch01.html#Murthy2022)]: for example, S3 only stores files, and if you want to analyze that data, you will have to run the analysis code somewhere outside of S3. This implies transferring the data over the network, which we will discuss further in [“Distributed versus Single-Node Systems”](ch01.html#sec_introduction_distributed).
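
For example (bucket and key names made up), counting the rows of a CSV file stored in S3 means the object’s bytes are first transferred over the network to wherever this code runs; the storage service itself does not execute analysis code.

```python
import boto3

s3 = boto3.client("s3")

# The whole object crosses the network to this process; S3 only stores it.
body = s3.get_object(Bucket="example-bucket",
                     Key="events/2024-01.csv")["Body"].read()
row_count = body.count(b"\n")  # crude row count, for illustration
print(row_count)
```
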
Moreover, cloud-native systems are often *multitenant*, which means that rather than having a separate machine for each customer, data and computation from several different customers are handled on the same shared hardware by the same service [[28](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Vanlightly2023)]. Multitenancy can enable better hardware utilization, easier scalability, and easier management by the cloud provider, but it also requires careful engineering to ensure that one customer’s activity does not affect the performance or security of the system for other customers [[29](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Jonas2019)].
Moreover, cloud-native systems are often *multitenant*, which means that rather than having a separate machine for each customer, data and computation from several different customers are handled on the same shared hardware by the same service [[28](ch01.html#Vanlightly2023)]. Multitenancy can enable better hardware utilization, easier scalability, and easier management by the cloud provider, but it also requires careful engineering to ensure that one customer’s activity does not affect the performance or security of the system for other customers [[29](ch01.html#Jonas2019)].
--------
### 雲時代的運營
傳統上,管理組織伺服器端資料基礎設施的人被稱為*資料庫管理員*(DBAs)或*系統管理員*(sysadmins)。近年來,許多組織試圖將軟體開發和運營的角色整合到一個團隊中,共同負責後端服務和資料基礎設施;*DevOps*哲學指導了這一趨勢。*站點可靠性工程師*(SREs)是谷歌實施這一理念的方式 [[30](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Beyer2016)]。
傳統上,管理組織伺服器端資料基礎設施的人被稱為*資料庫管理員*(DBAs)或*系統管理員*(sysadmins)。近年來,許多組織試圖將軟體開發和運營的角色整合到一個團隊中,共同負責後端服務和資料基礎設施;*DevOps*哲學指導了這一趨勢。*站點可靠性工程師*(SREs)是谷歌實施這一理念的方式 [[30](ch01.html#Beyer2016)]。
運營的角色是確保服務可靠地交付給使用者(包括配置基礎設施和部署應用程式),並確保穩定的生產環境(包括監控和診斷可能影響可靠性的問題)。對於自託管系統,運營傳統上涉及大量單機層面的工作,如容量規劃(例如,監控可用磁碟空間並在空間用盡前新增更多磁碟)、配置新機器、將服務從一臺機器移至另一臺以及安裝作業系統補丁。
許多雲服務提供了一個API,隱藏了實際實現服務的單個機器。例如,雲端儲存用*計量計費*取代了固定大小的磁碟,您可以在不提前規劃容量需求的情況下儲存資料,並根據實際使用的空間收費。此外,許多雲服務即使單個機器失敗也能保持高可用性(見[“可靠性和容錯”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_reliability))。
許多雲服務提供了一個API,隱藏了實際實現服務的單個機器。例如,雲端儲存用*計量計費*取代了固定大小的磁碟,您可以在不提前規劃容量需求的情況下儲存資料,並根據實際使用的空間收費。此外,許多雲服務即使單個機器失敗也能保持高可用性(見[“可靠性和容錯”](ch02.html#sec_introduction_reliability))。
從單個機器到服務的這種重點轉變伴隨著運營角色的變化。提供可靠服務的高階目標仍然相同,但過程和工具已經演變。DevOps/SRE哲學更加強調:
@ -485,22 +480,22 @@ Moreover, cloud-native systems are often *multitenant*, which means that rather
- 偏好短暫的虛擬機器和服務而不是長時間執行的伺服器,
- 促進頻繁的應用更新,
- 從事件中學習,
- 即使個別人員來去,也要保留組織對系統的知識 [[31](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Limoncelli2020)]。
- 即使個別人員來去,也要保留組織對系統的知識 [[31](ch01.html#Limoncelli2020)]。
隨著雲服務的興起,角色出現了分化:基礎設施公司的運營團隊專注於向大量客戶提供可靠服務的細節,而服務的客戶儘可能少地花時間和精力在基礎設施上 [[32](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Majors2020)]。
隨著雲服務的興起,角色出現了分化:基礎設施公司的運營團隊專注於向大量客戶提供可靠服務的細節,而服務的客戶儘可能少地花時間和精力在基礎設施上 [[32](ch01.html#Majors2020)]。
雲服務的客戶仍然需要運營,但他們關注的方面不同,如選擇最適合特定任務的服務、將不同服務相互整合以及從一個服務遷移到另一個服務。儘管計量計費消除了傳統意義上的容量規劃的需要,但仍然重要的是瞭解您正在使用哪些資源以及用途,以免在不需要的雲資源上浪費金錢:容量規劃變成了財務規劃,效能最佳化變成了成本最佳化 [[33](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Cherkasky2021)]。此外,雲服務確實有資源限制或*配額*(如您可以同時執行的最大程序數),您需要了解並計劃這些限制,以免遇到問題 [[34](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Kushchi2023)]。
雲服務的客戶仍然需要運營,但他們關注的方面不同,如選擇最適合特定任務的服務、將不同服務相互整合以及從一個服務遷移到另一個服務。儘管計量計費消除了傳統意義上的容量規劃的需要,但仍然重要的是瞭解您正在使用哪些資源以及用途,以免在不需要的雲資源上浪費金錢:容量規劃變成了財務規劃,效能最佳化變成了成本最佳化 [[33](ch01.html#Cherkasky2021)]。此外,雲服務確實有資源限制或*配額*(如您可以同時執行的最大程序數),您需要了解並計劃這些限制,以免遇到問題 [[34](ch01.html#Kushchi2023)]。
採用雲服務可能比執行自己的基礎設施更容易且更快,儘管即使在這裡,學習如何使用它和可能繞過其限制也有成本。隨著越來越多的供應商提供針對不同用例的更廣泛的雲服務,不同服務之間的整合成為特別的挑戰 [[35](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Bernhardsson2021), [36](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Stancil2021)]。ETL(見[“資料倉庫”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_dwh))只是故事的一部分;運營雲服務也需要相互整合。目前缺乏促進此類整合的標準,因此它通常涉及大量的手動努力。
採用雲服務可能比執行自己的基礎設施更容易且更快,儘管即使在這裡,學習如何使用它和可能繞過其限制也有成本。隨著越來越多的供應商提供針對不同用例的更廣泛的雲服務,不同服務之間的整合成為特別的挑戰 [[35](ch01.html#Bernhardsson2021), [36](ch01.html#Stancil2021)]。ETL(見[“資料倉庫”](ch01.html#sec_introduction_dwh))只是故事的一部分;運營雲服務也需要相互整合。目前缺乏促進此類整合的標準,因此它通常涉及大量的手動努力。
其他不能完全外包給雲服務的運營方面包括維護應用程式及其使用的庫的安全性、管理自己的服務之間的互動、監控服務的負載以及追蹤效能下降或中斷等問題的原因。雖然雲正在改變運營的角色,但運營的需求依舊迫切。
Traditionally, the people managing an organization’s server-side data infrastructure were known as *database administrators* (DBAs) or *system administrators* (sysadmins). More recently, many organizations have tried to integrate the roles of software development and operations into teams with a shared responsibility for both backend services and data infrastructure; the *DevOps* philosophy has guided this trend. *Site Reliability Engineers* (SREs) are Google’s implementation of this idea [[30](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Beyer2016)].
Traditionally, the people managing an organization’s server-side data infrastructure were known as *database administrators* (DBAs) or *system administrators* (sysadmins). More recently, many organizations have tried to integrate the roles of software development and operations into teams with a shared responsibility for both backend services and data infrastructure; the *DevOps* philosophy has guided this trend. *Site Reliability Engineers* (SREs) are Google’s implementation of this idea [[30](ch01.html#Beyer2016)].
The role of operations is to ensure services are reliably delivered to users (including configuring infrastructure and deploying applications), and to ensure a stable production environment (including monitoring and diagnosing any problems that may affect reliability). For self-hosted systems, operations traditionally involves a significant amount of work at the level of individual machines, such as capacity planning (e.g., monitoring available disk space and adding more disks before you run out of space), provisioning new machines, moving services from one machine to another, and installing operating system patches.
Many cloud services present an API that hides the individual machines that actually implement the service. For example, cloud storage replaces fixed-size disks with *metered billing*, where you can store data without planning your capacity needs in advance, and you are then charged based on the space actually used. Moreover, many cloud services remain highly available, even when individual machines have failed (see [“Reliability and Fault Tolerance”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_reliability)).
Many cloud services present an API that hides the individual machines that actually implement the service. For example, cloud storage replaces fixed-size disks with *metered billing*, where you can store data without planning your capacity needs in advance, and you are then charged based on the space actually used. Moreover, many cloud services remain highly available, even when individual machines have failed (see [“Reliability and Fault Tolerance”](ch02.html#sec_introduction_reliability)).
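
As a toy illustration of metered billing (the price is an assumption; real pricing varies by provider, region, and storage class):

```python
PRICE_PER_GB_MONTH = 0.023  # assumed price in USD, for illustration only

def monthly_storage_cost(average_gb_stored: float) -> float:
    # You pay for what you actually stored, not for pre-provisioned disks.
    return average_gb_stored * PRICE_PER_GB_MONTH

print(f"${monthly_storage_cost(500):.2f}")  # 500 GB on average -> $11.50
```
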
This shift in emphasis from individual machines to services has been accompanied by a change in the role of operations. The high-level goal of providing a reliable service remains the same, but the processes and tools have evolved. The DevOps/SRE philosophy places greater emphasis on:
@ -508,13 +503,13 @@ This shift in emphasis from individual machines to services has been accompanied
- preferring ephemeral virtual machines and services over long running servers,
- enabling frequent application updates,
- learning from incidents, and
- preserving the organization’s knowledge about the system, even as individual people come and go [[31](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Limoncelli2020)].
- preserving the organization’s knowledge about the system, even as individual people come and go [[31](ch01.html#Limoncelli2020)].
With the rise of cloud services, there has been a bifurcation of roles: operations teams at infrastructure companies specialize in the details of providing a reliable service to a large number of customers, while the customers of the service spend as little time and effort as possible on infrastructure [[32](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Majors2020)].
With the rise of cloud services, there has been a bifurcation of roles: operations teams at infrastructure companies specialize in the details of providing a reliable service to a large number of customers, while the customers of the service spend as little time and effort as possible on infrastructure [[32](ch01.html#Majors2020)].
Customers of cloud services still require operations, but they focus on different aspects, such as choosing the most appropriate service for a given task, integrating different services with each other, and migrating from one service to another. Even though metered billing removes the need for capacity planning in the traditional sense, it’s still important to know what resources you are using for which purpose, so that you don’t waste money on cloud resources that are not needed: capacity planning becomes financial planning, and performance optimization becomes cost optimization [[33](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Cherkasky2021)]. Moreover, cloud services do have resource limits or *quotas* (such as the maximum number of processes you can run concurrently), which you need to know about and plan for before you run into them [[34](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Kushchi2023)].
Customers of cloud services still require operations, but they focus on different aspects, such as choosing the most appropriate service for a given task, integrating different services with each other, and migrating from one service to another. Even though metered billing removes the need for capacity planning in the traditional sense, it’s still important to know what resources you are using for which purpose, so that you don’t waste money on cloud resources that are not needed: capacity planning becomes financial planning, and performance optimization becomes cost optimization [[33](ch01.html#Cherkasky2021)]. Moreover, cloud services do have resource limits or *quotas* (such as the maximum number of processes you can run concurrently), which you need to know about and plan for before you run into them [[34](ch01.html#Kushchi2023)].
Adopting a cloud service can be easier and quicker than running your own infrastructure, although even here there is a cost in learning how to use it, and perhaps working around its limitations. Integration between different services becomes a particular challenge as a growing number of vendors offer an ever broader range of cloud services targeting different use cases [[35](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Bernhardsson2021), [36](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Stancil2021)]. ETL (see [“Data Warehousing”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_dwh)) is only part of the story; operational cloud services also need to be integrated with each other. At present, there is a lack of standards that would facilitate this sort of integration, so it often involves significant manual effort.
Adopting a cloud service can be easier and quicker than running your own infrastructure, although even here there is a cost in learning how to use it, and perhaps working around its limitations. Integration between different services becomes a particular challenge as a growing number of vendors offer an ever broader range of cloud services targeting different use cases [[35](ch01.html#Bernhardsson2021), [36](ch01.html#Stancil2021)]. ETL (see [“Data Warehousing”](ch01.html#sec_introduction_dwh)) is only part of the story; operational cloud services also need to be integrated with each other. At present, there is a lack of standards that would facilitate this sort of integration, so it often involves significant manual effort.
Other operational aspects that cannot fully be outsourced to cloud services include maintaining the security of an application and the libraries it uses, managing the interactions between your own services, monitoring the load on your services, and tracking down the cause of problems such as performance degradations or outages. While the cloud is changing the role of operations, the need for operations is as great as ever.
@ -537,15 +532,15 @@ Other operational aspects that cannot fully be outsourced to cloud services incl
- 容錯/高可用性
如果您的應用程式需要在一臺機器(或多臺機器、網路或整個資料中心)宕機時仍然繼續工作,您可以使用多臺機器來提供冗餘。當一臺機器失敗時,另一臺可以接管。見[“可靠性和容錯”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_reliability)。
如果您的應用程式需要在一臺機器(或多臺機器、網路或整個資料中心)宕機時仍然繼續工作,您可以使用多臺機器來提供冗餘。當一臺機器失敗時,另一臺可以接管。見[“可靠性和容錯”](ch02.html#sec_introduction_reliability)。
- 可擴充套件性
如果您的資料量或計算需求超過單臺機器的處理能力,您可以將負載分散到多臺機器上。見[“可擴充套件性”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_scalability)。
如果您的資料量或計算需求超過單臺機器的處理能力,您可以將負載分散到多臺機器上。見[“可擴充套件性”](ch02.html#sec_introduction_scalability)。
- 延遲
如果您的使用者遍佈全球,您可能希望在全球各地設定伺服器,以便每個使用者都可以從地理位置靠近他們的資料中心獲得服務。這避免了使用者必須等待網路包繞地球半圈來響應他們的請求。見[“描述效能”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_percentiles)。
如果您的使用者遍佈全球,您可能希望在全球各地設定伺服器,以便每個使用者都可以從地理位置靠近他們的資料中心獲得服務。這避免了使用者必須等待網路包繞地球半圈來響應他們的請求。見[“描述效能”](ch02.html#sec_introduction_percentiles)。
- 彈性
@ -557,7 +552,7 @@ Other operational aspects that cannot fully be outsourced to cloud services incl
- 法律合規
一些國家有資料居留法律,要求在其管轄區內的人的資料必須在該國地理範圍內儲存和處理 [[37](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Korolov2022)]。這些規則的範圍各不相同——例如,在某些情況下,它僅適用於醫療或財務資料,而其他情況則更廣泛。因此,一個在幾個這樣的司法管轄區有使用者的服務將不得不將其資料分佈在幾個位置的伺服器上。
一些國家有資料居留法律,要求在其管轄區內的人的資料必須在該國地理範圍內儲存和處理 [[37](ch01.html#Korolov2022)]。這些規則的範圍各不相同——例如,在某些情況下,它僅適用於醫療或財務資料,而其他情況則更廣泛。因此,一個在幾個這樣的司法管轄區有使用者的服務將不得不將其資料分佈在幾個位置的伺服器上。
這些原因適用於您自己編寫的服務(應用程式程式碼)和由現成軟體組成的服務(例如資料庫)。
@ -574,15 +569,15 @@ A system that involves several machines communicating via a network is called a
- Fault tolerance/high availability
If your application needs to continue working even if one machine (or several machines, or the network, or an entire datacenter) goes down, you can use multiple machines to give you redundancy. When one fails, another one can take over. See [“Reliability and Fault Tolerance”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_reliability).
If your application needs to continue working even if one machine (or several machines, or the network, or an entire datacenter) goes down, you can use multiple machines to give you redundancy. When one fails, another one can take over. See [“Reliability and Fault Tolerance”](ch02.html#sec_introduction_reliability).
- Scalability
If your data volume or computing requirements grow bigger than a single machine can handle, you can potentially spread the load across multiple machines. See [“Scalability”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_scalability).
If your data volume or computing requirements grow bigger than a single machine can handle, you can potentially spread the load across multiple machines. See [“Scalability”](ch02.html#sec_introduction_scalability).
- Latency
If you have users around the world, you might want to have servers at various locations worldwide so that each user can be served from a datacenter that is geographically close to them. That avoids the users having to wait for network packets to travel halfway around the world to answer their requests. See [“Describing Performance”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_percentiles).
If you have users around the world, you might want to have servers at various locations worldwide so that each user can be served from a datacenter that is geographically close to them. That avoids the users having to wait for network packets to travel halfway around the world to answer their requests. See [“Describing Performance”](ch02.html#sec_introduction_percentiles).
- Elasticity
@ -594,7 +589,7 @@ A system that involves several machines communicating via a network is called a
- Legal compliance
Some countries have data residency laws that require data about people in their jurisdiction to be stored and processed geographically within that country [[37](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Korolov2022)]. The scope of these rules varies—for example, in some cases it applies only to medical or financial data, while other cases are broader. A service with users in several such jurisdictions will therefore have to distribute their data across servers in several locations.
Some countries have data residency laws that require data about people in their jurisdiction to be stored and processed geographically within that country [[37](ch01.html#Korolov2022)]. The scope of these rules varies—for example, in some cases it applies only to medical or financial data, while other cases are broader. A service with users in several such jurisdictions will therefore have to distribute their data across servers in several locations.
These reasons apply both to services that you write yourself (application code) and services consisting of off-the-shelf software (such as databases).
@ -622,7 +617,7 @@ These reasons apply both to services that you write yourself (application code)
將系統分佈在多臺機器上,最常見的方式是將它們分為客戶端和伺服器,並讓客戶端向伺服器發出請求。如我們將在[連結待補充]中討論的,這種通訊最常使用HTTP。同一個程序可能既是伺服器(處理傳入請求),也是客戶端(向其他服務發出傳出請求)。
這種構建應用程式的方式傳統上被稱為*面向服務的架構*(SOA);最近這個想法被細化為*微服務*架構 [[45](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Newman2021_ch1), [46](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Richardson2014)]。在這種架構中,每個服務都有一個明確定義的目的(例如,在S3的情況下,這將是檔案儲存);每個服務都暴露一個可以透過網路由客戶端呼叫的API,並且每個服務都有一個負責其維護的團隊。因此,一個複雜的應用程式可以被分解為多個互動的服務,每個服務由一個單獨的團隊管理。
這種構建應用程式的方式傳統上被稱為*面向服務的架構*(SOA);最近這個想法被細化為*微服務*架構 [[45](ch01.html#Newman2021_ch1), [46](ch01.html#Richardson2014)]。在這種架構中,每個服務都有一個明確定義的目的(例如,在S3的情況下,這將是檔案儲存);每個服務都暴露一個可以透過網路由客戶端呼叫的API,並且每個服務都有一個負責其維護的團隊。因此,一個複雜的應用程式可以被分解為多個互動的服務,每個服務由一個單獨的團隊管理。
將複雜的軟體分解為多個服務有幾個優點:每個服務都可以獨立更新,減少團隊間的協調工作;每個服務可以被分配其所需的硬體資源;透過在API後面隱藏實現細節,服務所有者可以自由更改實現,而不影響客戶端。在資料儲存方面,通常每個服務都有自己的資料庫,並且服務之間不共享資料庫:共享資料庫將有效地使整個資料庫結構成為服務API的一部分,然後更改該結構將會很困難。共享的資料庫還可能導致一個服務的查詢負面影響其他服務的效能。
@ -630,15 +625,15 @@ These reasons apply both to services that you write yourself (application code)
微服務API的演進可能具有挑戰性。呼叫API的客戶端希望API具有某些欄位。開發人員可能希望根據業務需求的變化新增或刪除API中的欄位,但這樣做可能導致客戶端失敗。更糟糕的是,這種失敗通常直到開發週期後期,當更新的服務API部署到暫存或生產環境時才被發現。API描述標準如OpenAPI和gRPC有助於管理客戶端和伺服器API之間的關係;我們將在[連結待補充]中進一步討論這些內容。
微服務主要是對人的問題的技術解決方案:允許不同團隊獨立進展,無需彼此協調。這在大公司中很有價值,但在小公司中,如果沒有許多團隊,使用微服務可能是不必要的開銷,更傾向於以最簡單的方式實現應用程式 [[45](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Newman2021_ch1)]。
微服務主要是對人的問題的技術解決方案:允許不同團隊獨立進展,無需彼此協調。這在大公司中很有價值,但在小公司中,如果沒有許多團隊,使用微服務可能是不必要的開銷,更傾向於以最簡單的方式實現應用程式 [[45](ch01.html#Newman2021_ch1)]。
*無伺服器*,或*功能即服務*(FaaS),是部署服務的另一種方法,其中基礎設施的管理被外包給雲供應商 [[29](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Jonas2019)]。使用虛擬機器時,您必須明確選擇何時啟動或關閉例項;相比之下,在無伺服器模型中,雲提供商根據對您服務的傳入請求,自動分配和釋放硬體資源 [[47](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Shahrad2020)]。“無伺服器”的術語可能會產生誤導:每個無伺服器功能執行仍然在伺服器上執行,但後續執行可能在不同的伺服器上進行。
*無伺服器*,或*功能即服務*(FaaS),是部署服務的另一種方法,其中基礎設施的管理被外包給雲供應商 [[29](ch01.html#Jonas2019)]。使用虛擬機器時,您必須明確選擇何時啟動或關閉例項;相比之下,在無伺服器模型中,雲提供商根據對您服務的傳入請求,自動分配和釋放硬體資源 [[47](ch01.html#Shahrad2020)]。“無伺服器”的術語可能會產生誤導:每個無伺服器功能執行仍然在伺服器上執行,但後續執行可能在不同的伺服器上進行。
就像雲端儲存用計量計費模式取代了容量規劃(提前決定購買多少硬碟)一樣,無伺服器方法正在將計量計費帶到程式碼執行:您只需為應用程式程式碼實際執行的時間付費,而不必提前預配資源。
The most common way of distributing a system across multiple machines is to divide them into clients and servers, and let the clients make requests to the servers. Most commonly HTTP is used for this communication, as we will discuss in [Link to Come]. The same process may be both a server (handling incoming requests) and a client (making outbound requests to other services).
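
A minimal sketch of the client/server pattern using only the Python standard library; the port and response payload are arbitrary choices for illustration.

```python
import json
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Server side: handle an incoming HTTP request.
        body = json.dumps({"path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    if sys.argv[1:] == ["server"]:
        HTTPServer(("localhost", 8080), Handler).serve_forever()
    else:
        # Client side: make an outbound request (run the server first,
        # in a separate process).
        print(urlopen("http://localhost:8080/hello").read())
```
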
This way of building applications has traditionally been called a *service-oriented architecture* (SOA); more recently the idea has been refined into a *microservices* architecture [[45](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Newman2021_ch1), [46](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Richardson2014)]. In this architecture, a service has one well-defined purpose (for example, in the case of S3, this would be file storage); each service exposes an API that can be called by clients via the network, and each service has one team that is responsible for its maintenance. A complex application can thus be decomposed into multiple interacting services, each managed by a separate team.
This way of building applications has traditionally been called a *service-oriented architecture* (SOA); more recently the idea has been refined into a *microservices* architecture [[45](ch01.html#Newman2021_ch1), [46](ch01.html#Richardson2014)]. In this architecture, a service has one well-defined purpose (for example, in the case of S3, this would be file storage); each service exposes an API that can be called by clients via the network, and each service has one team that is responsible for its maintenance. A complex application can thus be decomposed into multiple interacting services, each managed by a separate team.
There are several advantages to breaking down a complex piece of software into multiple services: each service can be updated independently, reducing coordination effort among teams; each service can be assigned the hardware resources it needs; and by hiding the implementation details behind an API, the service owners are free to change the implementation without affecting clients. In terms of data storage, it is common for each service to have its own databases, and not to share databases between services: sharing a database would effectively make the entire database structure a part of the service’s API, and then that structure would be difficult to change. Shared databases could also cause one service’s queries to negatively impact the performance of other services.
@ -646,9 +641,9 @@ On the other hand, having many services can itself breed complexity: each servic
Microservice APIs can be challenging to evolve. Clients that call an API expect the API to have certain fields. Developers might wish to add or remove fields to an API as business needs change, but doing so can cause clients to fail. Worse still, such failures are often not discovered until late in the development cycle when the updated service API is deployed to a staging or production environment. API description standards such as OpenAPI and gRPC help manage the relationship between client and server APIs; we discuss these further in [Link to Come].
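
The compatibility rules involved can be illustrated with a small sketch (the field names are invented): a client that ignores unknown fields keeps working when the server adds one, but breaks as soon as a field it depends on is removed.

```python
# Server response, version 1:
v1 = {"id": 42, "name": "Alice"}

# Version 2 adds an optional field; clients that ignore unknown
# fields keep working unchanged:
v2 = {"id": 42, "name": "Alice", "email": "alice@example.com"}

def render_greeting(user: dict) -> str:
    # This client breaks if the server ever *removes* "name".
    return f"Hello, {user['name']}!"

print(render_greeting(v1))  # works
print(render_greeting(v2))  # still works: the extra field is ignored
```
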
Microservices are primarily a technical solution to a people problem: allowing different teams to make progress independently without having to coordinate with each other. This is valuable in a large company, but in a small company where there are not many teams, using microservices is likely to be unnecessary overhead, and it is preferable to implement the application in the simplest way possible [[45](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Newman2021_ch1)].
Microservices are primarily a technical solution to a people problem: allowing different teams to make progress independently without having to coordinate with each other. This is valuable in a large company, but in a small company where there are not many teams, using microservices is likely to be unnecessary overhead, and it is preferable to implement the application in the simplest way possible [[45](ch01.html#Newman2021_ch1)].
*Serverless*, or *function-as-a-service* (FaaS), is another approach to deploying services, in which the management of the infrastructure is outsourced to a cloud vendor [[29](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Jonas2019)]. When using virtual machines, you have to explicitly choose when to start up or shut down an instance; in contrast, with the serverless model, the cloud provider automatically allocates and frees hardware resources as needed, based on the incoming requests to your service [[47](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Shahrad2020)]. The term “serverless” can be misleading: each serverless function execution still runs on a server, but subsequent executions might run on a different one.
*Serverless*, or *function-as-a-service* (FaaS), is another approach to deploying services, in which the management of the infrastructure is outsourced to a cloud vendor [[29](ch01.html#Jonas2019)]. When using virtual machines, you have to explicitly choose when to start up or shut down an instance; in contrast, with the serverless model, the cloud provider automatically allocates and frees hardware resources as needed, based on the incoming requests to your service [[47](ch01.html#Shahrad2020)]. The term “serverless” can be misleading: each serverless function execution still runs on a server, but subsequent executions might run on a different one.
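
As an illustration, a deployed serverless function is typically just a handler that the platform invokes once per event. The sketch below follows the handler convention of AWS Lambda’s Python runtime; the event field and the HTTP-style return shape are assumptions for illustration.

```python
# The platform allocates an execution environment and calls this
# function for each incoming event; between calls, it may move the
# function to a different server entirely.
def handler(event, context):
    # "event" carries the request; its exact shape depends on the trigger.
    name = event.get("name", "world")  # assumed field, for illustration
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```
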
Just like cloud storage replaced capacity planning (deciding in advance how many disks to buy) with a metered billing model, the serverless approach is bringing metered billing to code execution: you only pay for the time that your application code is actually running, rather than having to provision resources in advance.
@ -660,24 +655,24 @@ Just like cloud storage replaced capacity planning (deciding in advance how many
雲計算並非構建大規模計算系統的唯一方式;另一種選擇是*高效能計算*(HPC),也稱為*超級計算*。雖然有一些重疊,但HPC通常有不同的優先順序並採用與雲計算和企業資料中心繫統不同的技術。其中一些差異包括:
- 超級計算機通常用於計算密集型的科學計算任務,如天氣預報、分子動力學(模擬原子和分子的運動)、複雜的最佳化問題和求解偏微分方程。另一方面,雲計算傾向於用於線上服務、商業資料系統和需要高可用性服務使用者請求的類似系統。
- 超級計算機通常執行大型批處理作業,這些作業會不時地將計算狀態檢查點儲存到磁碟。如果節點失敗,一個常見的解決方案是簡單地停止整個叢集工作,修復故障節點,然後從最後一個檢查點重新開始計算 [[48](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Barroso2018), [49](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Fiala2012)]。在雲服務中,通常不希望停止整個叢集,因為服務需要持續地以最小的中斷服務於使用者。
- 超級計算機通常由專用硬體構建,每個節點都相當可靠。雲服務中的節點通常由商品機構建,這些商品機由於規模經濟可以以較低成本提供等效效能,但也具有更高的故障率(見[“硬體和軟體故障”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_hardware_faults))。
- 超級計算機節點通常透過共享記憶體和遠端直接記憶體訪問(RDMA)進行通訊,這支援高頻寬和低延遲,但假設系統使用者之間有高度的信任 [[50](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#KornfeldSimpson2020)]。在雲計算中,網路和機器經常由互不信任的組織共享,需要更強的安全機制,如資源隔離(例如,虛擬機器)、加密和認證。
- 雲資料中心網路通常基於IP和乙太網,按Clos拓撲排列,以提供高切面頻寬——這是衡量網路整體效能的常用指標 [[48](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Barroso2018), [51](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Singh2015)]。超級計算機通常使用專用的網路拓撲,如多維網格和環面 [[52](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Lockwood2014)],這為具有已知通訊模式的HPC工作負載提供了更好的效能。
- 超級計算機通常執行大型批處理作業,這些作業會不時地將計算狀態檢查點儲存到磁碟。如果節點失敗,一個常見的解決方案是簡單地停止整個叢集工作,修復故障節點,然後從最後一個檢查點重新開始計算 [[48](ch01.html#Barroso2018), [49](ch01.html#Fiala2012)]。在雲服務中,通常不希望停止整個叢集,因為服務需要持續地以最小的中斷服務於使用者。
- 超級計算機通常由專用硬體構建,每個節點都相當可靠。雲服務中的節點通常由商品機構建,這些商品機由於規模經濟可以以較低成本提供等效效能,但也具有更高的故障率(見[“硬體和軟體故障”](ch02.html#sec_introduction_hardware_faults))。
- 超級計算機節點通常透過共享記憶體和遠端直接記憶體訪問(RDMA)進行通訊,這支援高頻寬和低延遲,但假設系統使用者之間有高度的信任 [[50](ch01.html#KornfeldSimpson2020)]。在雲計算中,網路和機器經常由互不信任的組織共享,需要更強的安全機制,如資源隔離(例如,虛擬機器)、加密和認證。
- 雲資料中心網路通常基於IP和乙太網,按Clos拓撲排列,以提供高切面頻寬——這是衡量網路整體效能的常用指標 [[48](ch01.html#Barroso2018), [51](ch01.html#Singh2015)]。超級計算機通常使用專用的網路拓撲,如多維網格和環面 [[52](ch01.html#Lockwood2014)],這為具有已知通訊模式的HPC工作負載提供了更好的效能。
- 雲計算允許節點分佈在多個地理位置,而超級計算機通常假設其所有節點都靠近在一起。
大規模分析系統有時與超級計算共享一些特徵,這就是為什麼如果您在這一領域工作,瞭解這些技術可能是值得的。然而,本書主要關注需要持續可用的服務,如[“可靠性和容錯”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_reliability)中所討論的。
大規模分析系統有時與超級計算共享一些特徵,這就是為什麼如果您在這一領域工作,瞭解這些技術可能是值得的。然而,本書主要關注需要持續可用的服務,如[“可靠性和容錯”](ch02.html#sec_introduction_reliability)中所討論的。
Cloud computing is not the only way of building large-scale computing systems; an alternative is *high-performance computing* (HPC), also known as *supercomputing*. Although there are overlaps, HPC often has different priorities and uses different techniques compared to cloud computing and enterprise datacenter systems. Some of those differences are:
- Supercomputers are typically used for computationally intensive scientific computing tasks, such as weather forecasting, molecular dynamics (simulating the movement of atoms and molecules), complex optimization problems, and solving partial differential equations. On the other hand, cloud computing tends to be used for online services, business data systems, and similar systems that need to serve user requests with high availability.
- A supercomputer typically runs large batch jobs that checkpoint the state of their computation to disk from time to time. If a node fails, a common solution is to simply stop the entire cluster workload, repair the faulty node, and then restart the computation from the last checkpoint [[48](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Barroso2018), [49](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Fiala2012)]. With cloud services, it is usually not desirable to stop the entire cluster, since the services need to continually serve users with minimal interruptions.
- Supercomputers are typically built from specialized hardware, where each node is quite reliable. Nodes in cloud services are usually built from commodity machines, which can provide equivalent performance at lower cost due to economies of scale, but which also have higher failure rates (see [“Hardware and Software Faults”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_hardware_faults)).
- Supercomputer nodes typically communicate through shared memory and remote direct memory access (RDMA), which support high bandwidth and low latency, but assume a high level of trust among the users of the system [[50](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#KornfeldSimpson2020)]. In cloud computing, the network and the machines are often shared by mutually untrusting organizations, requiring stronger security mechanisms such as resource isolation (e.g., virtual machines), encryption and authentication.
- Cloud datacenter networks are often based on IP and Ethernet, arranged in Clos topologies to provide high bisection bandwidth—a commonly used measure of a network’s overall performance [[48](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Barroso2018), [51](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Singh2015)]. Supercomputers often use specialized network topologies, such as multi-dimensional meshes and toruses [[52](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#Lockwood2014)], which yield better performance for HPC workloads with known communication patterns.
- A supercomputer typically runs large batch jobs that checkpoint the state of their computation to disk from time to time. If a node fails, a common solution is to simply stop the entire cluster workload, repair the faulty node, and then restart the computation from the last checkpoint [[48](ch01.html#Barroso2018), [49](ch01.html#Fiala2012)]. With cloud services, it is usually not desirable to stop the entire cluster, since the services need to continually serve users with minimal interruptions. (A sketch of this checkpoint/restart pattern follows after this list.)
- Supercomputers are typically built from specialized hardware, where each node is quite reliable. Nodes in cloud services are usually built from commodity machines, which can provide equivalent performance at lower cost due to economies of scale, but which also have higher failure rates (see [“Hardware and Software Faults”](ch02.html#sec_introduction_hardware_faults)).
- Supercomputer nodes typically communicate through shared memory and remote direct memory access (RDMA), which support high bandwidth and low latency, but assume a high level of trust among the users of the system [[50](ch01.html#KornfeldSimpson2020)]. In cloud computing, the network and the machines are often shared by mutually untrusting organizations, requiring stronger security mechanisms such as resource isolation (e.g., virtual machines), encryption and authentication.
- Cloud datacenter networks are often based on IP and Ethernet, arranged in Clos topologies to provide high bisection bandwidth—a commonly used measure of a network’s overall performance [[48](ch01.html#Barroso2018), [51](ch01.html#Singh2015)]. Supercomputers often use specialized network topologies, such as multi-dimensional meshes and toruses [[52](ch01.html#Lockwood2014)], which yield better performance for HPC workloads with known communication patterns.
- Cloud computing allows nodes to be distributed across multiple geographic locations, whereas supercomputers generally assume that all of their nodes are close together.
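
To illustrate the checkpoint/restart pattern from the list above, here is a minimal sketch; the checkpoint path, interval, and work step are placeholders, and a real job would write checkpoints atomically.

```python
import os
import pickle

CHECKPOINT = "state.pickle"  # placeholder path

def run_batch_job(total_steps: int) -> int:
    # On (re)start, resume from the last checkpoint if one exists.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            state, step = pickle.load(f)
    else:
        state, step = 0, 0

    while step < total_steps:
        state += step            # placeholder for the real computation
        step += 1
        if step % 1000 == 0:     # checkpoint from time to time
            with open(CHECKPOINT, "wb") as f:
                pickle.dump((state, step), f)
    return state
```
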
Large-scale analytics systems sometimes share some characteristics with supercomputing, which is why it can be worth knowing about these techniques if you are working in this area. However, this book is mostly concerned with services that need to be continually available, as discussed in [“Reliability and Fault Tolerance”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_reliability).
|
||||
Large-scale analytics systems sometimes share some characteristics with supercomputing, which is why it can be worth knowing about these techniques if you are working in this area. However, this book is mostly concerned with services that need to be continually available, as discussed in [“Reliability and Fault Tolerance”](ch02.html#sec_introduction_reliability).
@ -690,11 +685,11 @@ Large-scale analytics systems sometimes share some characteristics with supercom
特別需要關注的是儲存關於人們及其行為的資料的系統。自2018年以來,*通用資料保護條例*(GDPR)為許多歐洲國家的居民提供了更大的控制權和法律權利,用以管理他們的個人資料,類似的隱私法規也在世界各地的不同國家和地區得到採納,例如加利福尼亞消費者隱私法案(CCPA)。圍繞人工智慧的法規,如*歐盟人工智慧法案*,對個人資料的使用施加了進一步的限制。
此外,即使在不直接受法規約束的領域,也越來越多地認識到計算機系統對人和社會的影響。社交媒體改變了個人獲取新聞的方式,這影響了他們的政治觀點,從而可能影響選舉結果。自動化系統越來越多地做出對個人有深遠影響的決定,例如決定誰應獲得貸款或保險,誰應被邀請參加工作面試,或者誰應被懷疑犯有罪行 [[53](ch01.html#ONeil2016_ch1)]。
從事這些系統的每個人都負有考慮其倫理影響並確保遵守相關法律的責任。並不是每個人都必須成為法律和倫理的專家,但基本的法律和倫理原則意識與分散式系統的一些基礎知識同樣重要。
法律考量正在影響資料系統設計的基礎 [[54](ch01.html#Shastri2020)]。例如,GDPR授予個人在請求時刪除其資料的權利(有時稱為*被遺忘權*)。然而,正如我們在本書中將看到的,許多資料系統依賴於不可變構造,如作為設計一部分的僅追加日誌;我們如何確保在一個本應不可變的檔案中刪除某些資料?我們如何處理已併入派生資料集的資料的刪除問題(見[“記錄系統與派生資料”](ch01.html#sec_introduction_derived)),如機器學習模型的訓練資料?回答這些問題創造了新的工程挑戰。
目前我們沒有明確的指南來判斷哪些特定技術或系統架構應被視為“符合GDPR”的。法規故意沒有規定特定的技術,因為這些可能隨著技術的進步而迅速變化。相反,法律文字提出了需要解釋的高階原則。這意味著關於如何遵守隱私法規的問題沒有簡單的答案,但我們將透過這個視角審視本書中的一些技術。
@ -702,7 +697,7 @@ Large-scale analytics systems sometimes share some characteristics with supercom
政府或警察部門也可能強制公司交出資料。當存在資料可能揭示被刑事化行為的風險時(例如,在幾個中東和非洲國家的同性戀行為,或在幾個美國州尋求墮胎),儲存該資料為使用者創造了真正的安全風險。例如,透過位置資料很容易揭露到墮胎診所的旅行,甚至可能透過一段時間內使用者 IP 地址的日誌(表明大致位置)揭露。
一旦考慮到所有風險,可能會合理地決定某些資料根本不值得儲存,因此應該將其刪除。*資料最小化*原則(有時稱為德語術語*Datensparsamkeit*)與儲存大量資料的“大資料”哲學相悖,以防它在未來證明有用 [[55](ch01.html#Datensparsamkeit)]。但這與 GDPR 相符,後者規定只能為特定的、明確的目的收集個人資料,這些資料以後不能用於任何其他目的,且為了收集目的,儲存的資料不得超過必要的時間 [[56](ch01.html#GDPR)]。
企業也注意到了隱私和安全問題。信用卡公司要求支付處理業務遵守嚴格的支付卡行業(PCI)標準。處理者經常接受獨立審計師的評估,以驗證持續合規。軟體供應商也看到了增加的審查。現在許多買家要求其供應商符合服務組織控制(SOC)型別 2 標準。與 PCI 合規一樣,供應商接受第三方審計以驗證遵守情況。
@ -713,11 +708,11 @@ So far you’ve seen in this chapter that the architecture of data systems is in
Of particular concern are systems that store data about people and their behavior. Since 2018 the *General Data Protection Regulation* (GDPR) has given residents of many European countries greater control and legal rights over their personal data, and similar privacy regulation has been adopted in various other countries and states around the world, including for example the California Consumer Privacy Act (CCPA). Regulations around AI, such as the *EU AI Act*, place further restrictions on how personal data can be used.
Moreover, even in areas that are not directly subject to regulation, there is increasing recognition of the effects that computer systems have on people and society. Social media has changed how individuals consume news, which influences their political opinions and hence may affect the outcome of elections. Automated systems increasingly make decisions that have profound consequences for individuals, such as deciding who should be given a loan or insurance coverage, who should be invited to a job interview, or who should be suspected of a crime [[53](ch01.html#ONeil2016_ch1)].
Everyone who works on such systems shares a responsibility for considering the ethical impact and ensuring that they comply with relevant law. It is not necessary for everybody to become an expert in law and ethics, but a basic awareness of legal and ethical principles is just as important as, say, some foundational knowledge in distributed systems.
Legal considerations are influencing the very foundations of how data systems are being designed [[54](ch01.html#Shastri2020)]. For example, the GDPR grants individuals the right to have their data erased on request (sometimes known as the *right to be forgotten*). However, as we shall see in this book, many data systems rely on immutable constructs such as append-only logs as part of their design; how can we ensure deletion of some data in the middle of a file that is supposed to be immutable? How do we handle deletion of data that has been incorporated into derived datasets (see [“Systems of Record and Derived Data”](ch01.html#sec_introduction_derived)), such as training data for machine learning models? Answering these questions creates new engineering challenges.
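
One technique that is often discussed as an answer to this problem (offered here as an illustration, not as something the text above prescribes) is *crypto-shredding*: encrypt each person's entries in the append-only log under a per-person key, and implement erasure by destroying that key, which leaves the immutable entries in place but permanently unreadable. A minimal sketch, assuming the third-party `cryptography` package; the names and the in-memory "log" are hypothetical:

```python
from cryptography.fernet import Fernet

keys = {}  # per-user keys; in practice a separate, mutable key store
log = []   # append-only log of (user_id, ciphertext) entries

def append(user_id, record: bytes):
    key = keys.setdefault(user_id, Fernet.generate_key())
    log.append((user_id, Fernet(key).encrypt(record)))

def read_all(user_id):
    key = keys.get(user_id)
    if key is None:
        return []  # key destroyed: entries can no longer be decrypted
    f = Fernet(key)
    return [f.decrypt(ct) for uid, ct in log if uid == user_id]

def erase(user_id):
    keys.pop(user_id, None)  # "delete" without touching the immutable log

append("alice", b"signed up")
erase("alice")
assert read_all("alice") == []  # the log still holds the ciphertext
```

This approach only extends to derived datasets if they, too, store the person's data encrypted under the same key, which is part of what makes these questions genuine engineering challenges.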
At present we don’t have clear guidelines on which particular technologies or system architectures should be considered “GDPR-compliant” or not. The regulation deliberately does not mandate particular technologies, because these may quickly change as technology progresses. Instead, the legal texts set out high-level principles that are subject to interpretation. This means that there are no simple answers to the question of how to comply with privacy regulation, but we will look at some of the technologies in this book through this lens.
@ -725,7 +720,7 @@ In general, we store data because we think that its value is greater than the co
Governments or police forces might also compel companies to hand over data. When there is a risk that the data may reveal criminalized behaviors (for example, homosexuality in several Middle Eastern and African countries, or seeking an abortion in several US states), storing that data creates real safety risks for users. Travel to an abortion clinic, for example, could easily be revealed by location data, perhaps even by a log of the user’s IP addresses over time (which indicate approximate location).
Once all the risks are taken into account, it might be reasonable to decide that some data is simply not worth storing, and that it should therefore be deleted. This principle of *data minimization* (sometimes known by the German term *Datensparsamkeit*) runs counter to the “big data” philosophy of storing lots of data speculatively in case it turns out to be useful in the future [[55](ch01.html#Datensparsamkeit)]. But it fits with the GDPR, which mandates that personal data may only be collected for a specified, explicit purpose, that this data may not later be used for any other purpose, and that the data must not be kept for longer than necessary for the purposes for which it was collected [[56](ch01.html#GDPR)].
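
As one illustration of how such a retention mandate might be enforced (a sketch only; the purposes, periods, and record layout are invented for the example), a periodic job can drop records whose purpose-specific retention period has elapsed:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods, keyed by the purpose of collection.
RETENTION = {
    "order_fulfilment": timedelta(days=90),
    "fraud_prevention": timedelta(days=365),
}

records = [
    {"purpose": "order_fulfilment", "data": "...",
     "collected_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
]

def purge_expired(records, now=None):
    """Keep only records still within the retention period for their purpose."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records
            if now - r["collected_at"] <= RETENTION[r["purpose"]]]

records = purge_expired(records)
```

A real purge would also have to reach backups and derived datasets, for the reasons discussed earlier.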
Businesses have also taken notice of privacy and safety concerns. Credit card companies require payment processing businesses to adhere to strict payment card industry (PCI) standards. Processors undergo frequent evaluations from independent auditors to verify continued compliance. Software vendors have also seen increased scrutiny. Many buyers now require their vendors to comply with Service Organization Control (SOC) Type 2 standards. As with PCI compliance, vendors undergo third party audits to verify adherence.
@ -762,114 +757,114 @@ Finally, we saw that data systems architecture is determined not only by the nee
## 參考文獻

[[1](ch01.html#Kouzes2009-marker)] Richard T. Kouzes, Gordon A. Anderson, Stephen T. Elbert, Ian Gorton, and Deborah K. Gracio. [The Changing Paradigm of Data-Intensive Computing](http://www2.ic.uff.br/~boeres/slides_AP/papers/TheChanginParadigmDataIntensiveComputing_2009.pdf). *IEEE Computer*, volume 42, issue 1, January 2009. [doi:10.1109/MC.2009.26](https://doi.org/10.1109/MC.2009.26)

[[2](ch01.html#Kleppmann2019-marker)] Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, and Mark McGranaghan. [Local-first software: you own your data, in spite of the cloud](https://www.inkandswitch.com/local-first/). At *2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software* (Onward!), October 2019. [doi:10.1145/3359591.3359737](https://doi.org/10.1145/3359591.3359737)

[[3](ch01.html#Reis2022-marker)] Joe Reis and Matt Housley. [*Fundamentals of Data Engineering*](https://www.oreilly.com/library/view/fundamentals-of-data/9781098108298/). O’Reilly Media, 2022. ISBN: 9781098108304

[[4](ch01.html#Machado2023-marker)] Rui Pedro Machado and Helder Russa. [*Analytics Engineering with SQL and dbt*](https://www.oreilly.com/library/view/analytics-engineering-with/9781098142377/). O’Reilly Media, 2023. ISBN: 9781098142384

[[5](ch01.html#Codd1993-marker)] Edgar F. Codd, S. B. Codd, and C. T. Salley. [Providing OLAP to User-Analysts: An IT Mandate](http://www.estgv.ipv.pt/PaginasPessoais/jloureiro/ESI_AID2007_2008/fichas/codd.pdf). E. F. Codd Associates, 1993. Archived at [perma.cc/RKX8-2GEE](https://perma.cc/RKX8-2GEE)

[[6](ch01.html#Chaudhuri1997-marker)] Surajit Chaudhuri and Umeshwar Dayal. [An Overview of Data Warehousing and OLAP Technology](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/sigrecord.pdf). *ACM SIGMOD Record*, volume 26, issue 1, pages 65–74, March 1997. [doi:10.1145/248603.248616](https://doi.org/10.1145/248603.248616)

[[7](ch01.html#Ozcan2017-marker)] Fatma Özcan, Yuanyuan Tian, and Pinar Tözün. [Hybrid Transactional/Analytical Processing: A Survey](https://humming80.github.io/papers/sigmod-htaptut.pdf). At *ACM International Conference on Management of Data* (SIGMOD), May 2017. [doi:10.1145/3035918.3054784](https://doi.org/10.1145/3035918.3054784)

[[8](ch01.html#Prout2022-marker)] Adam Prout, Szu-Po Wang, Joseph Victor, Zhou Sun, Yongzhu Li, Jack Chen, Evan Bergeron, Eric Hanson, Robert Walzer, Rodrigo Gomes, and Nikita Shamgunov. [Cloud-Native Transactions and Analytics in SingleStore](https://dl.acm.org/doi/abs/10.1145/3514221.3526055). At *International Conference on Management of Data* (SIGMOD), June 2022. [doi:10.1145/3514221.3526055](https://doi.org/10.1145/3514221.3526055)

[[9](ch01.html#Stonebraker2005fitsall-marker)] Michael Stonebraker and Uğur Çetintemel. [‘One Size Fits All’: An Idea Whose Time Has Come and Gone](https://pages.cs.wisc.edu/~shivaram/cs744-readings/fits_all.pdf). At *21st International Conference on Data Engineering* (ICDE), April 2005. [doi:10.1109/ICDE.2005.1](https://doi.org/10.1109/ICDE.2005.1)

[[10](ch01.html#Cohen2009-marker)] Jeffrey Cohen, Brian Dolan, Mark Dunlap, Joseph M Hellerstein, and Caleb Welton. [MAD Skills: New Analysis Practices for Big Data](http://www.vldb.org/pvldb/vol2/vldb09-219.pdf). *Proceedings of the VLDB Endowment*, volume 2, issue 2, pages 1481–1492, August 2009. [doi:10.14778/1687553.1687576](https://doi.org/10.14778/1687553.1687576)

[[11](ch01.html#Olteanu2020-marker)] Dan Olteanu. [The Relational Data Borg is Learning](http://www.vldb.org/pvldb/vol13/p3502-olteanu.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 12, August 2020. [doi:10.14778/3415478.3415572](https://doi.org/10.14778/3415478.3415572)

[[12](ch01.html#Bornstein2020-marker)] Matt Bornstein, Martin Casado, and Jennifer Li. [Emerging Architectures for Modern Data Infrastructure: 2020](https://future.a16z.com/emerging-architectures-for-modern-data-infrastructure-2020/). *future.a16z.com*, October 2020. Archived at [perma.cc/LF8W-KDCC](https://perma.cc/LF8W-KDCC)

[[13](ch01.html#Fowler2015-marker)] Martin Fowler. [DataLake](https://www.martinfowler.com/bliki/DataLake.html). *martinfowler.com*, February 2015. Archived at [perma.cc/4WKN-CZUK](https://perma.cc/4WKN-CZUK)

[[14](ch01.html#Johnson2015-marker)] Bobby Johnson and Joseph Adler. [The Sushi Principle: Raw Data Is Better](https://learning.oreilly.com/videos/strata-hadoop/9781491924143/9781491924143-video210840/). At *Strata+Hadoop World*, February 2015.

[[15](ch01.html#Armbrust2021-marker)] Michael Armbrust, Ali Ghodsi, Reynold Xin, and Matei Zaharia. [Lakehouse: A New Generation of Open Platforms that Unify Data Warehousing and Advanced Analytics](https://www.cidrdb.org/cidr2021/papers/cidr2021_paper17.pdf). At *11th Annual Conference on Innovative Data Systems Research* (CIDR), January 2021.

[[16](ch01.html#DataOps-marker)] DataKitchen, Inc. [The DataOps Manifesto](https://dataopsmanifesto.org/en/). *dataopsmanifesto.org*, 2017. Archived at [perma.cc/3F5N-FUQ4](https://perma.cc/3F5N-FUQ4)

[[17](ch01.html#Manohar2021-marker)] Tejas Manohar. [What is Reverse ETL: A Definition & Why It’s Taking Off](https://hightouch.io/blog/reverse-etl/). *hightouch.io*, November 2021. Archived at [perma.cc/A7TN-GLYJ](https://perma.cc/A7TN-GLYJ)

[[18](ch01.html#ORegan2018-marker)] Simon O’Regan. [Designing Data Products](https://towardsdatascience.com/designing-data-products-b6b93edf3d23). *towardsdatascience.com*, August 2018. Archived at [perma.cc/HU67-3RV8](https://perma.cc/HU67-3RV8)

[[19](ch01.html#Fournier2021-marker)] Camille Fournier. [Why is it so hard to decide to buy?](https://skamille.medium.com/why-is-it-so-hard-to-decide-to-buy-d86fee98e88e) *skamille.medium.com*, July 2021. Archived at [perma.cc/6VSG-HQ5X](https://perma.cc/6VSG-HQ5X)

[[20](ch01.html#HeinemeierHansson2022-marker)] David Heinemeier Hansson. [Why we’re leaving the cloud](https://world.hey.com/dhh/why-we-re-leaving-the-cloud-654b47e0). *world.hey.com*, October 2022. Archived at [perma.cc/82E6-UJ65](https://perma.cc/82E6-UJ65)

[[21](ch01.html#Badizadegan2022-marker)] Nima Badizadegan. [Use One Big Server](https://specbranch.com/posts/one-big-server/). *specbranch.com*, August 2022. Archived at [perma.cc/M8NB-95UK](https://perma.cc/M8NB-95UK)

[[22](ch01.html#Yegge2020-marker)] Steve Yegge. [Dear Google Cloud: Your Deprecation Policy is Killing You](https://steve-yegge.medium.com/dear-google-cloud-your-deprecation-policy-is-killing-you-ee7525dc05dc). *steve-yegge.medium.com*, August 2020. Archived at [perma.cc/KQP9-SPGU](https://perma.cc/KQP9-SPGU)

[[23](ch01.html#Verbitski2017-marker)] Alexandre Verbitski, Anurag Gupta, Debanjan Saha, Murali Brahmadesam, Kamal Gupta, Raman Mittal, Sailesh Krishnamurthy, Sandor Maurice, Tengiz Kharatishvili, and Xiaofeng Bao. [Amazon Aurora: Design Considerations for High Throughput Cloud-Native Relational Databases](https://media.amazonwebservices.com/blog/2017/aurora-design-considerations-paper.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 1041–1052, May 2017. [doi:10.1145/3035918.3056101](https://doi.org/10.1145/3035918.3056101)

[[24](ch01.html#Antonopoulos2019_ch1-marker)] Panagiotis Antonopoulos, Alex Budovski, Cristian Diaconu, Alejandro Hernandez Saenz, Jack Hu, Hanuma Kodavalla, Donald Kossmann, Sandeep Lingam, Umar Farooq Minhas, Naveen Prakash, Vijendra Purohit, Hugh Qu, Chaitanya Sreenivas Ravella, Krystyna Reisteter, Sheetal Shrotri, Dixin Tang, and Vikram Wakade. [Socrates: The New SQL Server in the Cloud](https://www.microsoft.com/en-us/research/uploads/prod/2019/05/socrates.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 1743–1756, June 2019. [doi:10.1145/3299869.3314047](https://doi.org/10.1145/3299869.3314047)

[[25](ch01.html#Vuppalapati2020-marker)] Midhul Vuppalapati, Justin Miron, Rachit Agarwal, Dan Truong, Ashish Motivala, and Thierry Cruanes. [Building An Elastic Query Engine on Disaggregated Storage](https://www.usenix.org/system/files/nsdi20-paper-vuppalapati.pdf). At *17th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), February 2020.

[[26](ch01.html#Shapira2023-marker)] Gwen Shapira. [Compute-Storage Separation Explained](https://www.thenile.dev/blog/storage-compute). *thenile.dev*, January 2023. Archived at [perma.cc/QCV3-XJNZ](https://perma.cc/QCV3-XJNZ)

[[27](ch01.html#Murthy2022-marker)] Ravi Murthy and Gurmeet Goindi. [AlloyDB for PostgreSQL under the hood: Intelligent, database-aware storage](https://cloud.google.com/blog/products/databases/alloydb-for-postgresql-intelligent-scalable-storage). *cloud.google.com*, May 2022. Archived at [archive.org](https://web.archive.org/web/20220514021120/https://cloud.google.com/blog/products/databases/alloydb-for-postgresql-intelligent-scalable-storage)

[[28](ch01.html#Vanlightly2023-marker)] Jack Vanlightly. [The Architecture of Serverless Data Systems](https://jack-vanlightly.com/blog/2023/11/14/the-architecture-of-serverless-data-systems). *jack-vanlightly.com*, November 2023. Archived at [perma.cc/UDV4-TNJ5](https://perma.cc/UDV4-TNJ5)

[[29](ch01.html#Jonas2019-marker)] Eric Jonas, Johann Schleier-Smith, Vikram Sreekanti, Chia-Che Tsai, Anurag Khandelwal, Qifan Pu, Vaishaal Shankar, Joao Carreira, Karl Krauth, Neeraja Yadwadkar, Joseph E Gonzalez, Raluca Ada Popa, Ion Stoica, David A Patterson. [Cloud Programming Simplified: A Berkeley View on Serverless Computing](https://arxiv.org/abs/1902.03383). *arxiv.org*, February 2019.

[[30](ch01.html#Beyer2016-marker)] Betsy Beyer, Jennifer Petoff, Chris Jones, and Niall Richard Murphy. [*Site Reliability Engineering: How Google Runs Production Systems*](https://www.oreilly.com/library/view/site-reliability-engineering/9781491929117/). O’Reilly Media, 2016. ISBN: 9781491929124

[[31](ch01.html#Limoncelli2020-marker)] Thomas Limoncelli. [The Time I Stole $10,000 from Bell Labs](https://queue.acm.org/detail.cfm?id=3434773). *ACM Queue*, volume 18, issue 5, November 2020. [doi:10.1145/3434571.3434773](https://doi.org/10.1145/3434571.3434773)

[[32](ch01.html#Majors2020-marker)] Charity Majors. [The Future of Ops Jobs](https://acloudguru.com/blog/engineering/the-future-of-ops-jobs). *acloudguru.com*, August 2020. Archived at [perma.cc/GRU2-CZG3](https://perma.cc/GRU2-CZG3)

[[33](ch01.html#Cherkasky2021-marker)] Boris Cherkasky. [(Over)Pay As You Go for Your Datastore](https://medium.com/riskified-technology/over-pay-as-you-go-for-your-datastore-11a29ae49a8b). *medium.com*, September 2021. Archived at [perma.cc/Q8TV-2AM2](https://perma.cc/Q8TV-2AM2)

[[34](ch01.html#Kushchi2023-marker)] Shlomi Kushchi. [Serverless Doesn’t Mean DevOpsLess or NoOps](https://thenewstack.io/serverless-doesnt-mean-devopsless-or-noops/). *thenewstack.io*, February 2023. Archived at [perma.cc/3NJR-AYYU](https://perma.cc/3NJR-AYYU)

[[35](ch01.html#Bernhardsson2021-marker)] Erik Bernhardsson. [Storm in the stratosphere: how the cloud will be reshuffled](https://erikbern.com/2021/11/30/storm-in-the-stratosphere-how-the-cloud-will-be-reshuffled.html). *erikbern.com*, November 2021. Archived at [perma.cc/SYB2-99P3](https://perma.cc/SYB2-99P3)

[[36](ch01.html#Stancil2021-marker)] Benn Stancil. [The data OS](https://benn.substack.com/p/the-data-os). *benn.substack.com*, September 2021. Archived at [perma.cc/WQ43-FHS6](https://perma.cc/WQ43-FHS6)

[[37](ch01.html#Korolov2022-marker)] Maria Korolov. [Data residency laws pushing companies toward residency as a service](https://www.csoonline.com/article/3647761/data-residency-laws-pushing-companies-toward-residency-as-a-service.html). *csoonline.com*, January 2022. Archived at [perma.cc/CHE4-XZZ2](https://perma.cc/CHE4-XZZ2)

[[38](ch01.html#Nath2019-marker)] Kousik Nath. [These are the numbers every computer engineer should know](https://www.freecodecamp.org/news/must-know-numbers-for-every-computer-engineer/). *freecodecamp.org*, September 2019. Archived at [perma.cc/RW73-36RL](https://perma.cc/RW73-36RL)

[[39](ch01.html#Hellerstein2019-marker)] Joseph M Hellerstein, Jose Faleiro, Joseph E Gonzalez, Johann Schleier-Smith, Vikram Sreekanti, Alexey Tumanov, and Chenggang Wu. [Serverless Computing: One Step Forward, Two Steps Back](https://arxiv.org/abs/1812.03651). At *Conference on Innovative Data Systems Research* (CIDR), January 2019.

[[40](ch01.html#McSherry2015_ch1-marker)] Frank McSherry, Michael Isard, and Derek G. Murray. [Scalability! But at What COST?](https://www.usenix.org/system/files/conference/hotos15/hotos15-paper-mcsherry.pdf) At *15th USENIX Workshop on Hot Topics in Operating Systems* (HotOS), May 2015.

[[41](ch01.html#Sridharan2018-marker)] Cindy Sridharan. *[Distributed Systems Observability: A Guide to Building Robust Systems](https://unlimited.humio.com/rs/756-LMY-106/images/Distributed-Systems-Observability-eBook.pdf)*. Report, O’Reilly Media, May 2018. Archived at [perma.cc/M6JL-XKCM](https://perma.cc/M6JL-XKCM)

[[42](ch01.html#Majors2019-marker)] Charity Majors. [Observability — A 3-Year Retrospective](https://thenewstack.io/observability-a-3-year-retrospective/). *thenewstack.io*, August 2019. Archived at [perma.cc/CG62-TJWL](https://perma.cc/CG62-TJWL)

[[43](ch01.html#Sigelman2010-marker)] Benjamin H. Sigelman, Luiz André Barroso, Mike Burrows, Pat Stephenson, Manoj Plakal, Donald Beaver, Saul Jaspan, and Chandan Shanbhag. [Dapper, a Large-Scale Distributed Systems Tracing Infrastructure](https://research.google/pubs/pub36356/). Google Technical Report dapper-2010-1, April 2010. Archived at [perma.cc/K7KU-2TMH](https://perma.cc/K7KU-2TMH)

[[44](ch01.html#Laigner2021-marker)] Rodrigo Laigner, Yongluan Zhou, Marcos Antonio Vaz Salles, Yijian Liu, and Marcos Kalinowski. [Data management in microservices: State of the practice, challenges, and research directions](http://www.vldb.org/pvldb/vol14/p3348-laigner.pdf). *Proceedings of the VLDB Endowment*, volume 14, issue 13, pages 3348–3361, September 2021. [doi:10.14778/3484224.3484232](https://doi.org/10.14778/3484224.3484232)

[[45](ch01.html#Newman2021_ch1-marker)] Sam Newman. [*Building Microservices*, second edition](https://www.oreilly.com/library/view/building-microservices-2nd/9781492034018/). O’Reilly Media, 2021. ISBN: 9781492034025

[[46](ch01.html#Richardson2014-marker)] Chris Richardson. [Microservices: Decomposing Applications for Deployability and Scalability](http://www.infoq.com/articles/microservices-intro). *infoq.com*, May 2014. Archived at [perma.cc/CKN4-YEQ2](https://perma.cc/CKN4-YEQ2)

[[47](ch01.html#Shahrad2020-marker)] Mohammad Shahrad, Rodrigo Fonseca, Íñigo Goiri, Gohar Chaudhry, Paul Batum, Jason Cooke, Eduardo Laureano, Colby Tresness, Mark Russinovich, Ricardo Bianchini. [Serverless in the Wild: Characterizing and Optimizing the Serverless Workload at a Large Cloud Provider](https://www.usenix.org/system/files/atc20-shahrad.pdf). At *USENIX Annual Technical Conference* (ATC), July 2020.

[[48](ch01.html#Barroso2018-marker)] Luiz André Barroso, Urs Hölzle, and Parthasarathy Ranganathan. [The Datacenter as a Computer: Designing Warehouse-Scale Machines](https://www.morganclaypool.com/doi/10.2200/S00874ED3V01Y201809CAC046), third edition. Morgan & Claypool Synthesis Lectures on Computer Architecture, October 2018. [doi:10.2200/S00874ED3V01Y201809CAC046](https://doi.org/10.2200/S00874ED3V01Y201809CAC046)

[[49](ch01.html#Fiala2012-marker)] David Fiala, Frank Mueller, Christian Engelmann, Rolf Riesen, Kurt Ferreira, and Ron Brightwell. [Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing](http://moss.csc.ncsu.edu/~mueller/ftp/pub/mueller/papers/sc12.pdf). At *International Conference for High Performance Computing, Networking, Storage and Analysis* (SC), November 2012. [doi:10.1109/SC.2012.49](https://doi.org/10.1109/SC.2012.49)

[[50](ch01.html#KornfeldSimpson2020-marker)] Anna Kornfeld Simpson, Adriana Szekeres, Jacob Nelson, and Irene Zhang. [Securing RDMA for High-Performance Datacenter Storage Systems](https://www.usenix.org/conference/hotcloud20/presentation/kornfeld-simpson). At *12th USENIX Workshop on Hot Topics in Cloud Computing* (HotCloud), July 2020.

[[51](ch01.html#Singh2015-marker)] Arjun Singh, Joon Ong, Amit Agarwal, Glen Anderson, Ashby Armistead, Roy Bannon, Seb Boving, Gaurav Desai, Bob Felderman, Paulie Germano, Anand Kanagala, Jeff Provost, Jason Simmons, Eiichi Tanda, Jim Wanderer, Urs Hölzle, Stephen Stuart, and Amin Vahdat. [Jupiter Rising: A Decade of Clos Topologies and Centralized Control in Google’s Datacenter Network](http://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p183.pdf). At *Annual Conference of the ACM Special Interest Group on Data Communication* (SIGCOMM), August 2015. [doi:10.1145/2785956.2787508](https://doi.org/10.1145/2785956.2787508)

[[52](ch01.html#Lockwood2014-marker)] Glenn K. Lockwood. [Hadoop’s Uncomfortable Fit in HPC](http://glennklockwood.blogspot.co.uk/2014/05/hadoops-uncomfortable-fit-in-hpc.html). *glennklockwood.blogspot.co.uk*, May 2014. Archived at [perma.cc/S8XX-Y67B](https://perma.cc/S8XX-Y67B)

[[53](ch01.html#ONeil2016_ch1-marker)] Cathy O’Neil. *Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy*. Crown Publishing, 2016. ISBN: 9780553418811

[[54](ch01.html#Shastri2020-marker)] Supreeth Shastri, Vinay Banakar, Melissa Wasserman, Arun Kumar, and Vijay Chidambaram. [Understanding and Benchmarking the Impact of GDPR on Database Systems](http://www.vldb.org/pvldb/vol13/p1064-shastri.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 7, pages 1064–1077, March 2020. [doi:10.14778/3384345.3384354](https://doi.org/10.14778/3384345.3384354)

[[55](ch01.html#Datensparsamkeit-marker)] Martin Fowler. [Datensparsamkeit](https://www.martinfowler.com/bliki/Datensparsamkeit.html). *martinfowler.com*, December 2013. Archived at [perma.cc/R9QX-CME6](https://perma.cc/R9QX-CME6)

[[56](ch01.html#GDPR-marker)] [Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation)](https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&from=EN). *Official Journal of the European Union* L 119/1, May 2016.

647 zh-tw/ch2.md (file diff suppressed because it is too large)
443 zh-tw/ch3.md
@ -8,7 +8,18 @@
-----------------------------
資料模型或許是開發軟體最重要的部分,因為它們具有深遠的影響:不僅影響軟體的編寫方式,還影響我們*思考問題*的方式。
大多數應用程式是透過在一個數據模型之上層疊另一個數據模型來構建的。對於每一層,關鍵問題是:它是如何以下一層的模型*表現*出來的?例如:
1. 作為應用開發者,你觀察現實世界(其中有人、組織、商品、行動、資金流動、感測器等),並以物件或資料結構以及操作這些資料結構的 API 的形式對其進行建模。這些結構通常是針對你的應用特定的。
2. 當你想儲存這些資料結構時,你會用通用資料模型來表達它們,比如 JSON 或 XML 文件、關係資料庫中的表,或圖中的頂點和邊。這些資料模型是本章的主題。(本列表之後附有一個簡短的程式碼示例。)
3. 構建你的資料庫軟體的工程師決定了一種將該 JSON/關係/圖資料表示為記憶體、磁碟或網路上的位元組的方式。這種表現可能允許資料被查詢、搜尋、操作和以各種方式處理。我們將在[後續連結]中討論這些儲存引擎設計。
4. 在更低的層次上,硬體工程師已經找出瞭如何將位元組以電流、光脈衝、磁場等形式表示。
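
以下是一個極簡的示意(僅為說明而虛構的示例,`Person` 類與其欄位均為假設),展示第 1 層的應用物件如何以第 2 層的通用資料模型(此處為 JSON 文件)來表示:

```python
import json
from dataclasses import dataclass, asdict

# 第 1 層:針對應用的資料結構(現實世界中的「人」)
@dataclass
class Person:
    name: str
    org: str

# 第 2 層:以通用資料模型(JSON 文件)表示同一份資料
alice = Person(name="Alice", org="Example Corp")
doc = json.dumps(asdict(alice), ensure_ascii=False)
print(doc)  # {"name": "Alice", "org": "Example Corp"}

# 第 3 層(儲存引擎)與第 4 層(硬體)決定這段 JSON
# 如何進一步表示為位元組、電流或磁場。
```
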
在一個複雜的應用中,可能還有更多的中間層次,如基於 API 的 API,但基本思想仍然相同:每一層透過提供一個清晰的資料模型來隱藏其下層的複雜性。這些抽象使得不同的人群——例如,資料庫供應商的工程師和使用他們資料庫的應用開發者——能夠有效地合作。
實踐中廣泛使用了幾種不同的資料模型,通常用於不同的目的。某些型別的資料和某些查詢在一個模型中易於表達,在另一個模型中則顯得笨拙。在本章中,我們將透過比較關係模型、文件模型、基於圖的資料模型、事件源和資料框來探討這些權衡。我們還將簡要檢視允許你使用這些模型的查詢語言。這種比較將幫助你決定何時使用哪種模型。
Data models are perhaps the most important part of developing software, because they have such a profound effect: not only on how the software is written, but also on how we *think about the problem* that we are solving.
@ -25,11 +36,17 @@ Several different data models are widely used in practice, often for different p
## 術語:宣告式查詢語言
本章中的許多查詢語言(如 SQL、Cypher、SPARQL 或 Datalog)都是*宣告式*的,這意味著你指定你想要的資料的模式——結果必須滿足的條件,以及你希望資料如何轉換(例如,排序、分組和聚合)——但不指定*如何*實現這一目標。資料庫系統的查詢最佳化器可以決定使用哪些索引和哪些連線演算法,以及以何種順序執行查詢的各個部分。
相比之下,使用大多數程式語言,你將必須編寫一個*演算法*——即告訴計算機按哪個順序執行哪些操作。宣告式查詢語言具有吸引力,因為它通常比顯式演算法更簡潔、更易於編寫。但更重要的是,它還隱藏了查詢引擎的實現細節,這使得資料庫系統可以引入效能改進而無需對查詢進行任何更改 [[1](ch03.html#Brandon2024)]。
例如,資料庫可能能夠在多個 CPU 核心和機器上並行執行宣告式查詢,而你無需擔心如何實現該並行性 [[2](ch03.html#Hellerstein2010)]。在手工編碼的演算法中,自行實現這種並行執行將是一項巨大的工作。
Many of the query languages in this chapter (such as SQL, Cypher, SPARQL, or Datalog) are *declarative*, which means that you specify the pattern of the data you want—what conditions the results must meet, and how you want the data to be transformed (e.g., sorted, grouped, and aggregated)—but not *how* to achieve that goal. The database system’s query optimizer can decide which indexes and which join algorithms to use, and in which order to execute various parts of the query.
In contrast, with most programming languages you would have to write an *algorithm*—i.e., telling the computer which operations to perform in which order. A declarative query language is attractive because it is typically more concise and easier to write than an explicit algorithm. But more importantly, it also hides implementation details of the query engine, which makes it possible for the database system to introduce performance improvements without requiring any changes to queries [[1](ch03.html#Brandon2024)].
For example, a database might be able to execute a declarative query in parallel across multiple CPU cores and machines, without you having to worry about how to implement that parallelism [[2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Hellerstein2010)]. In a hand-coded algorithm it would be a lot of work to implement such parallel execution yourself.
For example, a database might be able to execute a declarative query in parallel across multiple CPU cores and machines, without you having to worry about how to implement that parallelism [[2](ch03.html#Hellerstein2010)]. In a hand-coded algorithm it would be a lot of work to implement such parallel execution yourself.
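To make the contrast concrete, here is a small illustrative query against a hypothetical `observations` table (anticipating the marine-biology example later in this chapter). It states only the desired result; index selection, join strategy, and any parallelism are left entirely to the optimizer:

```sql
-- Declarative: states *what* is wanted, not *how* to compute it.
-- The planner may pick an index scan, a hash aggregate, or several
-- parallel workers; the query text stays the same either way.
SELECT family, count(*) AS sightings
FROM observations
WHERE observation_timestamp >= TIMESTAMP '2024-01-01 00:00:00'
GROUP BY family
ORDER BY sightings DESC;
```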
@ -39,11 +56,11 @@ For example, a database might be able to execute a declarative query in parallel
## 關係模型與文件模型
The best-known data model today is probably that of SQL, based on the relational model proposed by Edgar Codd in 1970 [[3](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Codd1970)]: data is organized into *relations* (called *tables* in SQL), where each relation is an unordered collection of *tuples* (*rows* in SQL).
The best-known data model today is probably that of SQL, based on the relational model proposed by Edgar Codd in 1970 [[3](ch03.html#Codd1970)]: data is organized into *relations* (called *tables* in SQL), where each relation is an unordered collection of *tuples* (*rows* in SQL).
The relational model was originally a theoretical proposal, and many people at the time doubted whether it could be implemented efficiently. However, by the mid-1980s, relational database management systems (RDBMS) and SQL had become the tools of choice for most people who needed to store and query data with some kind of regular structure. Many data management use cases are still dominated by relational data decades later—for example, business analytics (see [“Stars and Snowflakes: Schemas for Analytics”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_analytics)).
The relational model was originally a theoretical proposal, and many people at the time doubted whether it could be implemented efficiently. However, by the mid-1980s, relational database management systems (RDBMS) and SQL had become the tools of choice for most people who needed to store and query data with some kind of regular structure. Many data management use cases are still dominated by relational data decades later—for example, business analytics (see [“Stars and Snowflakes: Schemas for Analytics”](ch03.html#sec_datamodels_analytics)).
Over the years, there have been many competing approaches to data storage and querying. In the 1970s and early 1980s, the *network model* and the *hierarchical model* were the main alternatives, but the relational model came to dominate them. Object databases came and went again in the late 1980s and early 1990s. XML databases appeared in the early 2000s, but have only seen niche adoption. Each competitor to the relational model generated a lot of hype in its time, but it never lasted [[4](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Stonebraker2005around)]. Instead, SQL has grown to incorporate other data types besides its relational core—for example, adding support for XML, JSON, and graph data [[5](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Winand2015)].
Over the years, there have been many competing approaches to data storage and querying. In the 1970s and early 1980s, the *network model* and the *hierarchical model* were the main alternatives, but the relational model came to dominate them. Object databases came and went again in the late 1980s and early 1990s. XML databases appeared in the early 2000s, but have only seen niche adoption. Each competitor to the relational model generated a lot of hype in its time, but it never lasted [[4](ch03.html#Stonebraker2005around)]. Instead, SQL has grown to incorporate other data types besides its relational core—for example, adding support for XML, JSON, and graph data [[5](ch03.html#Winand2015)].
In the 2010s, *NoSQL* was the latest buzzword that tried to overthrow the dominance of relational databases. NoSQL refers not to a single technology, but a loose set of ideas around new data models, schema flexibility, scalability, and a move towards open source licensing models. Some databases branded themselves as *NewSQL*, as they aim to provide the scalability of NoSQL systems along with the data model and transactional guarantees of traditional relational databases. The NoSQL and NewSQL ideas have been very influential in the design of data systems, but as the principles have become widely adopted, use of those terms has faded.
@ -53,22 +70,27 @@ The pros and cons of document and relational data have been debated extensively;
### 物件關係不匹配
目前大多數應用程式開發都使用物件導向的程式語言,這導致了對 SQL 資料模型的一種常見批評:如果資料儲存在關係表中,那麼應用程式程式碼中的物件與由表、行、列構成的資料庫模型之間,就需要一個笨拙的轉換層。這兩種模型之間的脫節有時被稱為 **阻抗不匹配(impedance mismatch)**[^i]。
Much application development today is done in object-oriented programming languages, which leads to a common criticism of the SQL data model: if data is stored in relational tables, an awkward translation layer is required between the objects in the application code and the database model of tables, rows, and columns. The disconnect between the models is sometimes called an *impedance mismatch*.
[^i]: 一個從電子學借用的術語。每個電路的輸入和輸出都有一定的阻抗(交流電阻)。當你將一個電路的輸出連線到另一個電路的輸入時,如果兩個電路的輸出和輸入阻抗匹配,則連線上的功率傳輸將被最大化。阻抗不匹配會導致訊號反射及其他問題。
> **注意**
>
> The term *impedance mismatch* is borrowed from electronics. Every electric circuit has a certain impedance (resistance to alternating current) on its inputs and outputs. When you connect one circuit’s output to another one’s input, the power transfer across the connection is maximized if the output and input impedances of the two circuits match. An impedance mismatch can lead to signal reflections and other troubles.
#### 物件關係對映(ORM)
Object-relational mapping (ORM) frameworks like ActiveRecord and Hibernate reduce the amount of boilerplate code required for this translation layer, but they are often criticized [[6](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Fowler2012)]. Some commonly cited problems are:
Object-relational mapping (ORM) frameworks like ActiveRecord and Hibernate reduce the amount of boilerplate code required for this translation layer, but they are often criticized [[6](ch03.html#Fowler2012)]. Some commonly cited problems are:
- ORMs are complex and can’t completely hide the differences between the two models, so developers still end up having to think about both the relational and the object representations of the data.
- ORMs are generally only used for OLTP app development (see [“Characterizing Analytical and Operational Systems”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_oltp)); data engineers making the data available for analytics purposes still need to work with the underlying relational representation, so the design of the relational schema still matters when using an ORM.
- ORMs are generally only used for OLTP app development (see [“Characterizing Analytical and Operational Systems”](ch01.html#sec_introduction_oltp)); data engineers making the data available for analytics purposes still need to work with the underlying relational representation, so the design of the relational schema still matters when using an ORM.
- Many ORMs work only with relational OLTP databases. Organizations with diverse data systems such as search engines, graph databases, and NoSQL systems might find ORM support lacking.
- Some ORMs generate relational schemas automatically, but these might be awkward for the users who are accessing the relational data directly, and they might be inefficient on the underlying database. Customizing the ORM’s schema and query generation can be complex and negate the benefit of using the ORM in the first place.
- ORMs often come with schema migration tools that update database schemas as model definitions change. Such tools are handy, but should be used with caution. Migrations on large or high-traffic tables can lock the entire table for an extended amount of time, resulting in downtime. Many operations teams prefer to run schema migrations manually, incrementally, during off peak hours, or with specialized tools. Safe schema migrations are discussed further in [“Schema flexibility in the document model”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_schema_flexibility).
- ORMs make it easy to accidentally write inefficient queries, such as the *N+1 query problem* [[7](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Mihalcea2023)]. For example, say you want to display a list of user comments on a page, so you perform one query that returns *N* comments, each containing the ID of its author. To show the name of the comment author you need to look up the ID in the users table. In hand-written SQL you would probably perform this join in the query and return the author name along with each comment, but with an ORM you might end up making a separate query on the users table for each of the *N* comments to look up its author, resulting in *N*+1 database queries in total, which is slower than performing the join in the database. To avoid this problem, you may need to tell the ORM to fetch the author information at the same time as fetching the comments.
- ORMs often come with schema migration tools that update database schemas as model definitions change. Such tools are handy, but should be used with caution. Migrations on large or high-traffic tables can lock the entire table for an extended amount of time, resulting in downtime. Many operations teams prefer to run schema migrations manually, incrementally, during off peak hours, or with specialized tools. Safe schema migrations are discussed further in [“Schema flexibility in the document model”](ch03.html#sec_datamodels_schema_flexibility).
- ORMs make it easy to accidentally write inefficient queries, such as the *N+1 query problem* [[7](ch03.html#Mihalcea2023)]. For example, say you want to display a list of user comments on a page, so you perform one query that returns *N* comments, each containing the ID of its author. To show the name of the comment author you need to look up the ID in the users table. In hand-written SQL you would probably perform this join in the query and return the author name along with each comment, but with an ORM you might end up making a separate query on the users table for each of the *N* comments to look up its author, resulting in *N*+1 database queries in total, which is slower than performing the join in the database. To avoid this problem, you may need to tell the ORM to fetch the author information at the same time as fetching the comments.
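As a hedged sketch of the N+1 pattern (the `comments` and `users` tables here are assumptions, not a schema from this book):

```sql
-- What an ORM may generate: one query for the comments...
SELECT id, author_id, body FROM comments WHERE post_id = 17;
-- ...then one lookup per comment author, executed N times:
SELECT name FROM users WHERE id = $1;   -- $1 = author_id of each comment

-- The hand-written alternative: a single join returns everything at once.
SELECT c.id, c.body, u.name
FROM comments c
JOIN users u ON u.id = c.author_id
WHERE c.post_id = 17;
```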
Nevertheless, ORMs also have advantages:
@ -78,15 +100,15 @@ Nevertheless, ORMs also have advantages:
#### The document data model for one-to-many relationships
Not all data lends itself well to a relational representation; let’s look at an example to explore a limitation of the relational model. [Figure 3-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_obama_relational) illustrates how a résumé (a LinkedIn profile) could be expressed in a relational schema. The profile as a whole can be identified by a unique identifier, `user_id`. Fields like `first_name` and `last_name` appear exactly once per user, so they can be modeled as columns on the `users` table.
Not all data lends itself well to a relational representation; let’s look at an example to explore a limitation of the relational model. [Figure 3-1](ch03.html#fig_obama_relational) illustrates how a résumé (a LinkedIn profile) could be expressed in a relational schema. The profile as a whole can be identified by a unique identifier, `user_id`. Fields like `first_name` and `last_name` appear exactly once per user, so they can be modeled as columns on the `users` table.
Most people have had more than one job in their career (positions), and people may have varying numbers of periods of education and any number of pieces of contact information. One way of representing such *one-to-many relationships* is to put positions, education, and contact information in separate tables, with a foreign key reference to the `users` table, as in [Figure 3-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_obama_relational).
Most people have had more than one job in their career (positions), and people may have varying numbers of periods of education and any number of pieces of contact information. One way of representing such *one-to-many relationships* is to put positions, education, and contact information in separate tables, with a foreign key reference to the `users` table, as in [Figure 3-1](ch03.html#fig_obama_relational).

> Figure 3-1. Representing a LinkedIn profile using a relational schema.
Another way of representing the same information, which is perhaps more natural and maps more closely to an object structure in application code, is as a JSON document as shown in [Example 3-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_obama_json).
Another way of representing the same information, which is perhaps more natural and maps more closely to an object structure in application code, is as a JSON document as shown in [Example 3-1](ch03.html#fig_obama_json).
> Example 3-1. Representing a LinkedIn profile as a JSON document
@ -113,11 +135,11 @@ Another way of representing the same information, which is perhaps more natural
}
```
Some developers feel that the JSON model reduces the impedance mismatch between the application code and the storage layer. However, as we shall see in [Link to Come], there are also problems with JSON as a data encoding format. The lack of a schema is often cited as an advantage; we will discuss this in [“Schema flexibility in the document model”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_schema_flexibility).
Some developers feel that the JSON model reduces the impedance mismatch between the application code and the storage layer. However, as we shall see in [Link to Come], there are also problems with JSON as a data encoding format. The lack of a schema is often cited as an advantage; we will discuss this in [“Schema flexibility in the document model”](ch03.html#sec_datamodels_schema_flexibility).
The JSON representation has better *locality* than the multi-table schema in [Figure 3-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_obama_relational) (see [“Data locality for reads and writes”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_document_locality)). If you want to fetch a profile in the relational example, you need to either perform multiple queries (query each table by `user_id`) or perform a messy multi-way join between the `users` table and its subordinate tables [[8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Schauder2023)]. In the JSON representation, all the relevant information is in one place, making the query both faster and simpler.
The JSON representation has better *locality* than the multi-table schema in [Figure 3-1](ch03.html#fig_obama_relational) (see [“Data locality for reads and writes”](ch03.html#sec_datamodels_document_locality)). If you want to fetch a profile in the relational example, you need to either perform multiple queries (query each table by `user_id`) or perform a messy multi-way join between the `users` table and its subordinate tables [[8](ch03.html#Schauder2023)]. In the JSON representation, all the relevant information is in one place, making the query both faster and simpler.
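As a hedged sketch of this difference (the table and column names are assumptions): the relational variant must consult several tables to rebuild one profile, while a document-style variant, modelled here as a PostgreSQL `jsonb` column, fetches it with a single lookup:

```sql
-- Relational: a multi-way join to assemble one profile. Note that joining
-- two independent one-to-many tables like this produces a cross product,
-- which is part of what makes such joins messy.
SELECT u.first_name, u.last_name, p.job_title, e.school_name
FROM users u
LEFT JOIN positions p ON p.user_id = u.user_id
LEFT JOIN education e ON e.user_id = u.user_id
WHERE u.user_id = 251;

-- Document-style: the whole profile lives in one place.
SELECT profile FROM profiles_json WHERE user_id = 251;
```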
The one-to-many relationships from the user profile to the user’s positions, educational history, and contact information imply a tree structure in the data, and the JSON representation makes this tree structure explicit (see [Figure 3-2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_json_tree)).
The one-to-many relationships from the user profile to the user’s positions, educational history, and contact information imply a tree structure in the data, and the JSON representation makes this tree structure explicit (see [Figure 3-2](ch03.html#fig_json_tree)).

@ -125,7 +147,7 @@ The one-to-many relationships from the user profile to the user’s positions, e
> **注意**
>
> This type of relationship is sometimes called *one-to-few* rather than *one-to-many*, since a résumé typically has a small number of positions [[9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Zola2014), [10](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Andrews2023)]. In situations where there may be a genuinely large number of related items—say, comments on a celebrity’s social media post, of which there could be many thousands—embedding them all in the same document may be too unwieldy, so the relational approach in [Figure 3-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_obama_relational) is preferable.
> This type of relationship is sometimes called *one-to-few* rather than *one-to-many*, since a résumé typically has a small number of positions [[9](ch03.html#Zola2014), [10](ch03.html#Andrews2023)]. In situations where there may be a genuinely large number of related items—say, comments on a celebrity’s social media post, of which there could be many thousands—embedding them all in the same document may be too unwieldy, so the relational approach in [Figure 3-1](ch03.html#fig_obama_relational) is preferable.
@ -134,7 +156,7 @@ The one-to-many relationships from the user profile to the user’s positions, e
### 正規化、反正規化與連線
In [Example 3-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_obama_json) in the preceding section, `region_id` is given as an ID, not as the plain-text string `"Washington, DC, United States"`. Why?
In [Example 3-1](ch03.html#fig_obama_json) in the preceding section, `region_id` is given as an ID, not as the plain-text string `"Washington, DC, United States"`. Why?
If the user interface has a free-text field for entering the region, it makes sense to store it as a plain-text string. But there are advantages to having standardized lists of geographic regions, and letting users choose from a drop-down list or autocompleter:
@ -171,28 +193,28 @@ db.users.aggregate([
])
```
#### Trade-offs of normalization
#### 正規化的利弊權衡
In the résumé example, while the `region_id` field is a reference into a standardized set of regions, the name of the `organization` (the company or government where the person worked) and `school_name` (where they studied) are just strings. This representation is denormalized: many people may have worked at the same company, but there is no ID linking them.
Perhaps the organization and school should be entities instead, and the profile should reference their IDs instead of their names? The same arguments for referencing the ID of a region also apply here. For example, say we wanted to include the logo of the school or company in addition to their name:
- In a denormalized representation, we would include the image URL of the logo on every individual person’s profile; this makes the JSON document self-contained, but it creates a headache if we ever need to change the logo, because we now need to find all of the occurrences of the old URL and update them [[9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Zola2014)].
- In a denormalized representation, we would include the image URL of the logo on every individual person’s profile; this makes the JSON document self-contained, but it creates a headache if we ever need to change the logo, because we now need to find all of the occurrences of the old URL and update them [[9](ch03.html#Zola2014)].
- In a normalized representation, we would create an entity representing an organization or school, and store its name, logo URL, and perhaps other attributes (description, news feed, etc.) once on that entity. Every résumé that mentions the organization would then simply reference its ID, and updating the logo is easy.
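A brief sketch of the update cost just described (all table and column names are assumptions):

```sql
-- Denormalized: the logo URL was copied into every profile that mentions
-- the organization, so a rebrand has to touch many rows.
UPDATE positions
SET org_logo_url = 'https://example.com/logos/new_logo.png'
WHERE org_name = 'Example Corp';

-- Normalized: the logo is stored once, and profiles reference org_id.
UPDATE organizations
SET logo_url = 'https://example.com/logos/new_logo.png'
WHERE org_id = 42;
```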
As a general principle, normalized data is usually faster to write (since there is only one copy), but slower to query (since it requires joins); denormalized data is usually faster to read (fewer joins), but more expensive to write (more copies to update). You might find it helpful to view denormalization as a form of derived data ([“Systems of Record and Derived Data”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_derived)), since you need to set up a process for updating the redundant copies of the data.
As a general principle, normalized data is usually faster to write (since there is only one copy), but slower to query (since it requires joins); denormalized data is usually faster to read (fewer joins), but more expensive to write (more copies to update). You might find it helpful to view denormalization as a form of derived data ([“Systems of Record and Derived Data”](ch01.html#sec_introduction_derived)), since you need to set up a process for updating the redundant copies of the data.
Besides the cost of performing all these updates, you also need to consider the consistency of the database if a process crashes halfway through making its updates. Databases that offer atomic transactions (see [Link to Come]) make it easier to remain consistent, but not all databases offer atomicity across multiple documents. It is also possible to ensure consistency through stream processing, which we discuss in [Link to Come].
Normalization tends to be better for OLTP systems, where both reads and updates need to be fast; analytics systems often fare better with denormalized data, since they perform updates in bulk, and the performance of read-only queries is the dominant concern. Moreover, in systems of small to moderate scale, a normalized data model is often best, because you don’t have to worry about keeping multiple copies of the data consistent with each other, and the cost of performing joins is acceptable. However, in very large-scale systems, the cost of joins can become problematic.
#### Denormalization in the social networking case study
#### 社交網路案例研究中的反正規化
In [“Case Study: Social Network Home Timelines”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_twitter) we compared a normalized representation ([Figure 2-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#fig_twitter_relational)) and a denormalized one (precomputed, materialized timelines): here, the join between `posts` and `follows` was too expensive, and the materialized timeline is a cache of the result of that join. The fan-out process that inserts a new post into followers’ timelines was our way of keeping the denormalized representation consistent.
In [“Case Study: Social Network Home Timelines”](ch02.html#sec_introduction_twitter) we compared a normalized representation ([Figure 2-1](ch02.html#fig_twitter_relational)) and a denormalized one (precomputed, materialized timelines): here, the join between `posts` and `follows` was too expensive, and the materialized timeline is a cache of the result of that join. The fan-out process that inserts a new post into followers’ timelines was our way of keeping the denormalized representation consistent.
However, the implementation of materialized timelines at X (formerly Twitter) does not store the actual text of each post: each entry actually only stores the post ID, the ID of the user who posted it, and a little bit of extra information to identify reposts and replies [[11](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Krikorian2012_ch3)]. In other words, it is a precomputed result of (approximately) the following query:
However, the implementation of materialized timelines at X (formerly Twitter) does not store the actual text of each post: each entry actually only stores the post ID, the ID of the user who posted it, and a little bit of extra information to identify reposts and replies [[11](ch03.html#Krikorian2012_ch3)]. In other words, it is a precomputed result of (approximately) the following query:
```
```sql
SELECT posts.id, posts.sender_id FROM posts
JOIN follows ON posts.sender_id = follows.followee_id
WHERE follows.follower_id = current_user
@ -200,7 +222,7 @@ SELECT posts.id, posts.sender_id FROM posts
LIMIT 1000
```
This means that whenever the timeline is read, the service still needs to perform two joins: look up the post ID to fetch the actual post content (as well as statistics such as the number of likes and replies), and look up the sender’s profile by ID (to get their username, profile picture, and other details). This process of looking up the human-readable information by ID is called *hydrating* the IDs, and it is essentially a join performed in application code [[11](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Krikorian2012_ch3)].
This means that whenever the timeline is read, the service still needs to perform two joins: look up the post ID to fetch the actual post content (as well as statistics such as the number of likes and replies), and look up the sender’s profile by ID (to get their username, profile picture, and other details). This process of looking up the human-readable information by ID is called *hydrating* the IDs, and it is essentially a join performed in application code [[11](ch03.html#Krikorian2012_ch3)].
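If this hydration step were pushed down into the database, it would look roughly like the following join (all table and column names are assumptions; X performs the equivalent lookups in application code against separate services):

```sql
-- Hydrating a precomputed timeline: resolve the stored IDs into
-- displayable content and up-to-date author details.
SELECT t.post_id, p.content, p.like_count, p.reply_count,
       u.username, u.profile_picture_url
FROM timelines t
JOIN posts p ON p.id = t.post_id
JOIN users u ON u.id = t.sender_id
WHERE t.owner_id = 12345          -- the user whose timeline is being viewed
ORDER BY p.created_at DESC
LIMIT 1000;
```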
The reason for storing only IDs in the precomputed timeline is that the data they refer to is fast-changing: the number of likes and replies may change multiple times per second on a popular post, and some users regularly change their username or profile photo. Since the timeline should show the latest like count and profile picture when it is viewed, it would not make sense to denormalize this information into the materialized timeline. Moreover, the storage cost would be increased significantly by such denormalization.
@ -216,15 +238,15 @@ If you need to decide whether to denormalize something in your application, the
### 多對一與多對多關係
While `positions` and `education` in [Figure 3-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_obama_relational) are examples of one-to-many or one-to-few relationships (one résumé has several positions, but each position belongs only to one résumé), the `region_id` field is an example of a *many-to-one* relationship (many people live in the same region, but we assume that each person lives in only one region at any one time).
While `positions` and `education` in [Figure 3-1](ch03.html#fig_obama_relational) are examples of one-to-many or one-to-few relationships (one résumé has several positions, but each position belongs only to one résumé), the `region_id` field is an example of a *many-to-one* relationship (many people live in the same region, but we assume that each person lives in only one region at any one time).
If we introduce entities for organizations and schools, and reference them by ID from the résumé, then we also have *many-to-many* relationships (one person has worked for several organizations, and an organization has several past or present employees). In a relational model, such a relationship is usually represented as an *associative table* or *join table*, as shown in [Figure 3-3](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_datamodels_m2m_rel): each position associates one user ID with one organization ID.
If we introduce entities for organizations and schools, and reference them by ID from the résumé, then we also have *many-to-many* relationships (one person has worked for several organizations, and an organization has several past or present employees). In a relational model, such a relationship is usually represented as an *associative table* or *join table*, as shown in [Figure 3-3](ch03.html#fig_datamodels_m2m_rel): each position associates one user ID with one organization ID.

> Figure 3-3. Many-to-many relationships in the relational model.
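The schema of Figure 3-3 could be sketched in SQL DDL as follows (column names beyond those implied by the figure are assumptions):

```sql
CREATE TABLE users (
  user_id    bigint PRIMARY KEY,
  first_name text,
  last_name  text
);

CREATE TABLE organizations (
  org_id bigint PRIMARY KEY,
  name   text
);

-- The associative (join) table: each row links one user to one organization.
CREATE TABLE positions (
  user_id   bigint REFERENCES users (user_id),
  org_id    bigint REFERENCES organizations (org_id),
  job_title text
);
```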
Many-to-one and many-to-many relationships do not easily fit within one self-contained JSON document; they lend themselves more to a normalized representation. In a document model, one possible representation is given in [Example 3-2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_datamodels_m2m_json) and illustrated in [Figure 3-4](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_datamodels_many_to_many): the data within each dotted rectangle can be grouped into one document, but the links to organizations and schools are best represented as references to other documents.
Many-to-one and many-to-many relationships do not easily fit within one self-contained JSON document; they lend themselves more to a normalized representation. In a document model, one possible representation is given in [Example 3-2](ch03.html#fig_datamodels_m2m_json) and illustrated in [Figure 3-4](ch03.html#fig_datamodels_many_to_many): the data within each dotted rectangle can be grouped into one document, but the links to organizations and schools are best represented as references to other documents.
> Example 3-2. A résumé that references organizations by ID.
@ -247,15 +269,15 @@ Many-to-one and many-to-many relationships do not easily fit within one self-con
Many-to-many relationships often need to be queried in “both directions”: for example, finding all of the organizations that a particular person has worked for, and finding all of the people who have worked at a particular organization. One way of enabling such queries is to store ID references on both sides, i.e., a résumé includes the ID of each organization where the person has worked, and the organization document includes the IDs of the résumés that mention that organization. This representation is denormalized, since the relationship is stored in two places, which could become inconsistent with each other.
A normalized representation stores the relationship in only one place, and relies on *secondary indexes* (which we discuss in [Link to Come]) to allow the relationship to be efficiently queried in both directions. In the relational schema of [Figure 3-3](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_datamodels_m2m_rel), we would tell the database to create indexes on both the `user_id` and the `org_id` columns of the `positions` table.
A normalized representation stores the relationship in only one place, and relies on *secondary indexes* (which we discuss in [Link to Come]) to allow the relationship to be efficiently queried in both directions. In the relational schema of [Figure 3-3](ch03.html#fig_datamodels_m2m_rel), we would tell the database to create indexes on both the `user_id` and the `org_id` columns of the `positions` table.
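Continuing the sketch above, the two indexes and the two directions of the query might look like this:

```sql
CREATE INDEX positions_by_user ON positions (user_id);
CREATE INDEX positions_by_org  ON positions (org_id);

-- "Which organizations has this person worked for?"
SELECT org_id FROM positions WHERE user_id = 251;

-- "Which people have worked for this organization?"
SELECT user_id FROM positions WHERE org_id = 42;
```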
In the document model of [Example 3-2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_datamodels_m2m_json), the database needs to index the `org_id` field of objects inside the `positions` array. Many document databases and relational databases with JSON support are able to create such indexes on values inside a document.
In the document model of [Example 3-2](ch03.html#fig_datamodels_m2m_json), the database needs to index the `org_id` field of objects inside the `positions` array. Many document databases and relational databases with JSON support are able to create such indexes on values inside a document.
#### Stars and Snowflakes: Schemas for Analytics
Data warehouses (see [“Data Warehousing”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_dwh)) are usually relational, and there are a few widely-used conventions for the structure of tables in a data warehouse: a *star schema*, *snowflake schema*, *dimensional modeling* [[12](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Kimball2013_ch3)], and *one big table* (OBT). These structures are optimized for the needs of business analysts. ETL processes translate data from operational systems into this schema.
Data warehouses (see [“Data Warehousing”](ch01.html#sec_introduction_dwh)) are usually relational, and there are a few widely-used conventions for the structure of tables in a data warehouse: a *star schema*, *snowflake schema*, *dimensional modeling* [[12](ch03.html#Kimball2013_ch3)], and *one big table* (OBT). These structures are optimized for the needs of business analysts. ETL processes translate data from operational systems into this schema.
The example schema in [Figure 3-5](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_dwh_schema) shows a data warehouse that might be found at a grocery retailer. At the center of the schema is a so-called *fact table* (in this example, it is called `fact_sales`). Each row of the fact table represents an event that occurred at a particular time (here, each row represents a customer’s purchase of a product). If we were analyzing website traffic rather than retail sales, each row might represent a page view or a click by a user.
The example schema in [Figure 3-5](ch03.html#fig_dwh_schema) shows a data warehouse that might be found at a grocery retailer. At the center of the schema is a so-called *fact table* (in this example, it is called `fact_sales`). Each row of the fact table represents an event that occurred at a particular time (here, each row represents a customer’s purchase of a product). If we were analyzing website traffic rather than retail sales, each row might represent a page view or a click by a user.

@ -265,41 +287,41 @@ Usually, facts are captured as individual events, because this allows maximum fl
Some of the columns in the fact table are attributes, such as the price at which the product was sold and the cost of buying it from the supplier (allowing the profit margin to be calculated). Other columns in the fact table are foreign key references to other tables, called *dimension tables*. As each row in the fact table represents an event, the dimensions represent the *who*, *what*, *where*, *when*, *how*, and *why* of the event.
For example, in [Figure 3-5](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_dwh_schema), one of the dimensions is the product that was sold. Each row in the `dim_product` table represents one type of product that is for sale, including its stock-keeping unit (SKU), description, brand name, category, fat content, package size, etc. Each row in the `fact_sales` table uses a foreign key to indicate which product was sold in that particular transaction. Queries often involve multiple joins to multiple dimension tables.
For example, in [Figure 3-5](ch03.html#fig_dwh_schema), one of the dimensions is the product that was sold. Each row in the `dim_product` table represents one type of product that is for sale, including its stock-keeping unit (SKU), description, brand name, category, fat content, package size, etc. Each row in the `fact_sales` table uses a foreign key to indicate which product was sold in that particular transaction. Queries often involve multiple joins to multiple dimension tables.
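A typical query against such a schema joins the fact table to several dimension tables at once; a sketch in the spirit of Figure 3-5 (the exact column names are assumptions):

```sql
-- Revenue per store city for one product category during one week.
SELECT s.city, sum(f.net_price * f.quantity) AS revenue
FROM fact_sales f
JOIN dim_product p ON p.product_sk = f.product_sk
JOIN dim_store   s ON s.store_sk   = f.store_sk
JOIN dim_date    d ON d.date_sk    = f.date_sk
WHERE p.category = 'Fresh fruit'
  AND d.calendar_date BETWEEN DATE '2024-01-01' AND DATE '2024-01-07'
GROUP BY s.city;
```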
Even date and time are often represented using dimension tables, because this allows additional information about dates (such as public holidays) to be encoded, allowing queries to differentiate between sales on holidays and non-holidays.
[Figure 3-5](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_dwh_schema) is an example of a star schema. The name comes from the fact that when the table relationships are visualized, the fact table is in the middle, surrounded by its dimension tables; the connections to these tables are like the rays of a star.
[Figure 3-5](ch03.html#fig_dwh_schema) is an example of a star schema. The name comes from the fact that when the table relationships are visualized, the fact table is in the middle, surrounded by its dimension tables; the connections to these tables are like the rays of a star.
A variation of this template is known as the *snowflake schema*, where dimensions are further broken down into subdimensions. For example, there could be separate tables for brands and product categories, and each row in the `dim_product` table could reference the brand and category as foreign keys, rather than storing them as strings in the `dim_product` table. Snowflake schemas are more normalized than star schemas, but star schemas are often preferred because they are simpler for analysts to work with [[12](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Kimball2013_ch3)].
A variation of this template is known as the *snowflake schema*, where dimensions are further broken down into subdimensions. For example, there could be separate tables for brands and product categories, and each row in the `dim_product` table could reference the brand and category as foreign keys, rather than storing them as strings in the `dim_product` table. Snowflake schemas are more normalized than star schemas, but star schemas are often preferred because they are simpler for analysts to work with [[12](ch03.html#Kimball2013_ch3)].
In a typical data warehouse, tables are often quite wide: fact tables often have over 100 columns, sometimes several hundred. Dimension tables can also be wide, as they include all the metadata that may be relevant for analysis—for example, the `dim_store` table may include details of which services are offered at each store, whether it has an in-store bakery, the square footage, the date when the store was first opened, when it was last remodeled, how far it is from the nearest highway, etc.
A star or snowflake schema consists mostly of many-to-one relationships (e.g., many sales occur for one particular product, in one particular store), represented as the fact table having foreign keys into dimension tables, or dimensions into sub-dimensions. In principle, other types of relationship could exist, but they are often denormalized in order to simplify queries. For example, if a customer buys several different products at once, that multi-item transaction is not represented explicitly; instead, there is a separate row in the fact table for each product purchased, and those facts all just happen to have the same customer ID, store ID, and timestamp.
Some data warehouse schemas take denormalization even further and leave out the dimension tables entirely, folding the information in the dimensions into denormalized columns on the fact table instead (essentially, precomputing the join between the fact table and the dimension tables). This approach is known as *one big table* (OBT), and while it requires more storage space, it sometimes enables faster queries [[13](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Kaminsky2022)].
Some data warehouse schemas take denormalization even further and leave out the dimension tables entirely, folding the information in the dimensions into denormalized columns on the fact table instead (essentially, precomputing the join between the fact table and the dimension tables). This approach is known as *one big table* (OBT), and while it requires more storage space, it sometimes enables faster queries [[13](ch03.html#Kaminsky2022)].
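Sketched in SQL, building such a table amounts to materializing the star join once (the column selection here is illustrative):

```sql
-- One big table: precompute the joins, trading storage for query speed.
CREATE TABLE sales_obt AS
SELECT f.*,
       p.brand, p.category,
       s.city AS store_city,
       d.is_holiday
FROM fact_sales f
JOIN dim_product p ON p.product_sk = f.product_sk
JOIN dim_store   s ON s.store_sk   = f.store_sk
JOIN dim_date    d ON d.date_sk    = f.date_sk;
```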
In the context of analytics, such denormalization is unproblematic, since the data typically represents a log of historical data that is not going to change (except maybe for occasionally correcting an error). The issues of data consistency and write overheads that occur with denormalization in OLTP systems are not as pressing in analytics.
#### When to Use Which Model
#### 什麼時候用哪種模型?
The main arguments in favor of the document data model are schema flexibility, better performance due to locality, and that for some applications it is closer to the object model used by the application. The relational model counters by providing better support for joins, many-to-one, and many-to-many relationships. Let’s examine these arguments in more detail.
If the data in your application has a document-like structure (i.e., a tree of one-to-many relationships, where typically the entire tree is loaded at once), then it’s probably a good idea to use a document model. The relational technique of *shredding*—splitting a document-like structure into multiple tables (like `positions`, `education`, and `contact_info` in [Figure 3-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_obama_relational))—can lead to cumbersome schemas and unnecessarily complicated application code.
If the data in your application has a document-like structure (i.e., a tree of one-to-many relationships, where typically the entire tree is loaded at once), then it’s probably a good idea to use a document model. The relational technique of *shredding*—splitting a document-like structure into multiple tables (like `positions`, `education`, and `contact_info` in [Figure 3-1](ch03.html#fig_obama_relational))—can lead to cumbersome schemas and unnecessarily complicated application code.
The document model has limitations: for example, you cannot refer directly to a nested item within a document, but instead you need to say something like “the second item in the list of positions for user 251”. If you do need to reference nested items, a relational approach works better, since you can refer to any item directly by its ID.
Some applications allow the user to choose the order of items: for example, imagine a to-do list or issue tracker where the user can drag and drop tasks to reorder them. The document model supports such applications well, because the items (or their IDs) can simply be stored in a JSON array to determine their order. In relational databases there isn’t a standard way of representing such reorderable lists, and various tricks are used: sorting by an integer column (requiring renumbering when you insert into the middle), a linked list of IDs, or fractional indexing [[14](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Nelson2018), [15](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Wallace2017), [16](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Greenspan2020)].
Some applications allow the user to choose the order of items: for example, imagine a to-do list or issue tracker where the user can drag and drop tasks to reorder them. The document model supports such applications well, because the items (or their IDs) can simply be stored in a JSON array to determine their order. In relational databases there isn’t a standard way of representing such reorderable lists, and various tricks are used: sorting by an integer column (requiring renumbering when you insert into the middle), a linked list of IDs, or fractional indexing [[14](ch03.html#Nelson2018), [15](ch03.html#Wallace2017), [16](ch03.html#Greenspan2020)].
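As a minimal sketch of the fractional-indexing trick (the schema is assumed): each item carries a floating-point sort key, and inserting between two neighbours takes the midpoint of their keys, so no other rows need to be renumbered:

```sql
CREATE TABLE todo_items (
  id       bigint PRIMARY KEY,
  title    text,
  sort_key double precision   -- position in the user-chosen order
);

INSERT INTO todo_items VALUES (1, 'buy milk', 1.0), (2, 'water plants', 2.0);

-- Drag a new task between the two existing ones: key = midpoint of neighbours.
INSERT INTO todo_items VALUES (3, 'call bank', (1.0 + 2.0) / 2);  -- sort_key 1.5

SELECT title FROM todo_items ORDER BY sort_key;
```

Repeatedly inserting at the same spot eventually exhausts floating-point precision, which is why practical fractional indexing schemes tend to use arbitrary-precision string keys instead.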
#### Schema flexibility in the document model
#### 文件模型中的模式靈活性
Most document databases, and the JSON support in relational databases, do not enforce any schema on the data in documents. XML support in relational databases usually comes with optional schema validation. No schema means that arbitrary keys and values can be added to a document, and when reading, clients have no guarantees as to what fields the documents may contain.
Document databases are sometimes called *schemaless*, but that’s misleading, as the code that reads the data usually assumes some kind of structure—i.e., there is an implicit schema, but it is not enforced by the database [[17](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Schemaless)]. A more accurate term is *schema-on-read* (the structure of the data is implicit, and only interpreted when the data is read), in contrast with *schema-on-write* (the traditional approach of relational databases, where the schema is explicit and the database ensures all data conforms to it when the data is written) [[18](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Awadallah2009)].
Document databases are sometimes called *schemaless*, but that’s misleading, as the code that reads the data usually assumes some kind of structure—i.e., there is an implicit schema, but it is not enforced by the database [[17](ch03.html#Schemaless)]. A more accurate term is *schema-on-read* (the structure of the data is implicit, and only interpreted when the data is read), in contrast with *schema-on-write* (the traditional approach of relational databases, where the schema is explicit and the database ensures all data conforms to it when the data is written) [[18](ch03.html#Awadallah2009)].
Schema-on-read is similar to dynamic (runtime) type checking in programming languages, whereas schema-on-write is similar to static (compile-time) type checking. Just as the advocates of static and dynamic type checking have big debates about their relative merits [[19](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Odersky2013)], enforcement of schemas in databases is a contentious topic, and in general there’s no right or wrong answer.
Schema-on-read is similar to dynamic (runtime) type checking in programming languages, whereas schema-on-write is similar to static (compile-time) type checking. Just as the advocates of static and dynamic type checking have big debates about their relative merits [[19](ch03.html#Odersky2013)], enforcement of schemas in databases is a contentious topic, and in general there’s no right or wrong answer.
The difference between the approaches is particularly noticeable in situations where an application wants to change the format of its data. For example, say you are currently storing each user’s full name in one field, and you instead want to store the first name and last name separately [[20](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Irwin2013)]. In a document database, you would just start writing new documents with the new fields and have code in the application that handles the case when old documents are read. For example:
The difference between the approaches is particularly noticeable in situations where an application wants to change the format of its data. For example, say you are currently storing each user’s full name in one field, and you instead want to store the first name and last name separately [[20](ch03.html#Irwin2013)]. In a document database, you would just start writing new documents with the new fields and have code in the application that handles the case when old documents are read. For example:
```
if (user && user.name && !user.first_name) {
@ -318,7 +340,7 @@ UPDATE users SET first_name = substring_index(name, ' ', 1); -- MySQL
In most relational databases, adding a column with a default value is fast and unproblematic, even on large tables. However, running the `UPDATE` statement is likely to be slow on a large table, since every row needs to be rewritten, and other schema operations (such as changing the data type of a column) also typically require the entire table to be copied.
Various tools exist to allow this type of schema changes to be performed in the background without downtime [[21](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Percona2023), [22](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Noach2016), [23](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Mukherjee2022), [24](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#PerezAradros2023)], but performing such migrations on large databases remains operationally challenging. Complicated migrations can be avoided by only adding the `first_name` column with a default value of `NULL` (which is fast), and filling it in at read time, like you would with a document database.
Various tools exist to allow this type of schema changes to be performed in the background without downtime [[21](ch03.html#Percona2023), [22](ch03.html#Noach2016), [23](ch03.html#Mukherjee2022), [24](ch03.html#PerezAradros2023)], but performing such migrations on large databases remains operationally challenging. Complicated migrations can be avoided by only adding the `first_name` column with a default value of `NULL` (which is fast), and filling it in at read time, like you would with a document database.
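In PostgreSQL flavour, the low-risk version of this migration might look as follows (`split_part` plays the role of the MySQL `substring_index` call shown above):

```sql
-- Fast: adding a nullable column is a metadata-only change, no row rewrite.
ALTER TABLE users ADD COLUMN first_name text;

-- Fill in at read time instead of rewriting every row up front:
SELECT COALESCE(first_name, split_part(name, ' ', 1)) AS first_name
FROM users;
```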
The schema-on-read approach is advantageous if the items in the collection don’t all have the same structure for some reason (i.e., the data is heterogeneous)—for example, because:
@ -329,17 +351,17 @@ In situations like these, a schema may hurt more than it helps, and schemaless d
#### Data locality for reads and writes
A document is usually stored as a single continuous string, encoded as JSON, XML, or a binary variant thereof (such as MongoDB’s BSON). If your application often needs to access the entire document (for example, to render it on a web page), there is a performance advantage to this *storage locality*. If data is split across multiple tables, like in [Figure 3-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#fig_obama_relational), multiple index lookups are required to retrieve it all, which may require more disk seeks and take more time.
A document is usually stored as a single continuous string, encoded as JSON, XML, or a binary variant thereof (such as MongoDB’s BSON). If your application often needs to access the entire document (for example, to render it on a web page), there is a performance advantage to this *storage locality*. If data is split across multiple tables, like in [Figure 3-1](ch03.html#fig_obama_relational), multiple index lookups are required to retrieve it all, which may require more disk seeks and take more time.
The locality advantage only applies if you need large parts of the document at the same time. The database typically needs to load the entire document, which can be wasteful if you only need to access a small part of a large document. On updates to a document, the entire document usually needs to be rewritten. For these reasons, it is generally recommended that you keep documents fairly small and avoid frequent small updates to a document.
However, the idea of storing related data together for locality is not limited to the document model. For example, Google’s Spanner database offers the same locality properties in a relational data model, by allowing the schema to declare that a table’s rows should be interleaved (nested) within a parent table [[25](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Corbett2012_ch2)]. Oracle allows the same, using a feature called *multi-table index cluster tables* [[26](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#BurlesonCluster)]. The *column-family* concept in the Bigtable data model (used in Cassandra, HBase, and ScyllaDB), also known as a *wide-column* model, has a similar purpose of managing locality [[27](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Chang2006_ch2)].
However, the idea of storing related data together for locality is not limited to the document model. For example, Google’s Spanner database offers the same locality properties in a relational data model, by allowing the schema to declare that a table’s rows should be interleaved (nested) within a parent table [[25](ch03.html#Corbett2012_ch2)]. Oracle allows the same, using a feature called *multi-table index cluster tables* [[26](ch03.html#BurlesonCluster)]. The *column-family* concept in the Bigtable data model (used in Cassandra, HBase, and ScyllaDB), also known as a *wide-column* model, has a similar purpose of managing locality [[27](ch03.html#Chang2006_ch2)].
#### Query languages for documents
#### 文件導向的查詢語言
Another difference between a relational and a document database is the language or API that you use to query it. Most relational databases are queried using SQL, but document databases are more varied. Some allow only key-value access by primary key, while others also offer secondary indexes to query for values inside documents, and some provide rich query languages.
XML databases are often queried using XQuery and XPath, which are designed to allow complex queries, including joins across multiple documents, and also format their results as XML [[28](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Walmsley2015)]. JSON Pointer [[29](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Bryan2013)] and JSONPath [[30](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Goessner2024)] provide an equivalent to XPath for JSON. MongoDB’s aggregation pipeline, whose `$lookup` operator for joins we saw in [“Normalization, Denormalization, and Joins”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_normalization), is an example of a query language for collections of JSON documents.
|
||||
XML databases are often queried using XQuery and XPath, which are designed to allow complex queries, including joins across multiple documents, and also format their results as XML [[28](ch03.html#Walmsley2015)]. JSON Pointer [[29](ch03.html#Bryan2013)] and JSONPath [[30](ch03.html#Goessner2024)] provide an equivalent to XPath for JSON. MongoDB’s aggregation pipeline, whose `$lookup` operator for joins we saw in [“Normalization, Denormalization, and Joins”](ch03.html#sec_datamodels_normalization), is an example of a query language for collections of JSON documents.
Let’s look at another example to get a feel for this language—this time an aggregation, which is especially needed for analytics. Imagine you are a marine biologist, and you add an observation record to your database every time you see animals in the ocean. Now you want to generate a report saying how many sharks you have sighted per month. In PostgreSQL you might express that query like this:
@ -351,13 +373,13 @@ WHERE family = 'Sharks'
GROUP BY observation_month;
```
- [](ch03.html#co_data_models_and_query_languages_CO1-1)
The `date_trunc('month', timestamp)` function determines the calendar month containing `timestamp`, and returns another timestamp representing the beginning of that month. In other words, it rounds a timestamp down to the nearest month.
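As a quick illustration of that rounding behavior (a minimal sketch, assuming standard PostgreSQL; the input timestamp is arbitrary):

```sql
-- date_trunc zeroes out everything below the given precision:
SELECT date_trunc('month', TIMESTAMP '2021-05-18 14:30:00');
-- returns 2021-05-01 00:00:00
```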
This query first filters the observations to only show species in the `Sharks` family, then groups the observations by the calendar month in which they occurred, and finally adds up the number of animals seen in all observations in that month. The same query can be expressed using MongoDB’s aggregation pipeline as follows:
```mongodb-json
db.observations.aggregate([
{ $match: { family: "Sharks" } },
{ $group: {
@ -380,7 +402,7 @@ This convergence of the models is good news for application developers, because
> **Note**
>
> Codd’s original description of the relational model [[3](ch03.html#Codd1970)] actually allowed something similar to JSON within a relational schema. He called it *nonsimple domains*. The idea was that a value in a row doesn’t have to just be a primitive datatype like a number or a string, but it could also be a nested relation (table)—so you can have an arbitrarily nested tree structure as a value, much like the JSON or XML support that was added to SQL over 30 years later.
@ -415,20 +437,20 @@ A graph consists of two kinds of objects: *vertices* (also known as *nodes* or *
Vertices are junctions, and edges represent the roads or railway lines between them.
Well-known algorithms can operate on these graphs: for example, map navigation apps search for the shortest path between two points in a road network, and PageRank can be used on the web graph to determine the popularity of a web page and thus its ranking in search results [[31](ch03.html#Page1999)].
Graphs can be represented in several different ways. In the *adjacency list* model, each vertex stores the IDs of its neighbor vertices that are one edge away. Alternatively, you can use an *adjacency matrix*, a two-dimensional array where each row and each column corresponds to a vertex, where the value is zero when there is no edge between the row vertex and the column vertex, and where the value is one if there is an edge. The adjacency list is good for graph traversals, and the matrix is good for machine learning (see [“Dataframes, Matrices, and Arrays”](ch03.html#sec_datamodels_dataframes)).
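To make the adjacency list concrete, here is a minimal sketch in SQL (the table and column names are illustrative, not taken from the book’s examples). Finding the neighbors of a vertex is a single indexed lookup; an adjacency matrix would instead store a 0-or-1 entry for every (row, column) pair of vertices.

```sql
-- Adjacency list: one row per directed edge.
CREATE TABLE edge (
    tail_vertex bigint NOT NULL,  -- the edge points from this vertex...
    head_vertex bigint NOT NULL   -- ...to this one
);
CREATE INDEX edge_tails ON edge (tail_vertex);

-- All vertices one edge away from vertex 42:
SELECT head_vertex FROM edge WHERE tail_vertex = 42;
```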
In the examples just given, all the vertices in a graph represent the same kind of thing (people, web pages, or road junctions, respectively). However, graphs are not limited to such *homogeneous* data: an equally powerful use of graphs is to provide a consistent way of storing completely different types of objects in a single database. For example:
- Facebook maintains a single graph with many different types of vertices and edges: vertices represent people, locations, events, checkins, and comments made by users; edges indicate which people are friends with each other, which checkin happened in which location, who commented on which post, who attended which event, and so on [[32](ch03.html#Bronson2013)].
- Knowledge graphs are used by search engines to record facts about entities that often occur in search queries, such as organizations, people, and places [[33](ch03.html#Noy2019)]. This information is obtained by crawling and analyzing the text on websites; some websites, such as Wikidata, also publish graph data in a structured form.
There are several different, but related, ways of structuring and querying data in graphs. In this section we will discuss the *property graph* model (implemented by Neo4j, Memgraph, KùzuDB [[34](ch03.html#Feng2023)], and others [[35](ch03.html#Besta2019)]) and the *triple-store* model (implemented by Datomic, AllegroGraph, Blazegraph, and others). These models are fairly similar in what they can express, and some graph databases (such as Amazon Neptune) support both models.
We will also look at four query languages for graphs (Cypher, SPARQL, Datalog, and GraphQL), as well as SQL support for querying graphs. Other graph query languages exist, such as Gremlin [[36](ch03.html#TinkerPop2023)], but these will give us a representative overview.
To illustrate these different languages and models, this section uses the graph shown in [Figure 3-6](ch03.html#fig_datamodels_graph) as running example. It could be taken from a social network or a genealogical database: it shows two people, Lucy from Idaho and Alain from Saint-Lô, France. They are married and living in London. Each person and each location is represented as a vertex, and the relationships between them as edges. This example will help demonstrate some queries that are easy in graph databases, but difficult in other models.

@ -455,7 +477,7 @@ Each edge consists of:
- A label to describe the kind of relationship between the two vertices
- A collection of properties (key-value pairs)
You can think of a graph store as consisting of two relational tables, one for vertices and one for edges, as shown in [Example 3-3](ch03.html#fig_graph_sql_schema) (this schema uses the PostgreSQL `jsonb` datatype to store the properties of each vertex or edge). The head and tail vertex are stored for each edge; if you want the set of incoming or outgoing edges for a vertex, you can query the `edges` table by `head_vertex` or `tail_vertex`, respectively.
##### Example 3-3. Representing a property graph using a relational schema
@ -481,17 +503,17 @@ CREATE INDEX edges_heads ON edges (head_vertex);
Some important aspects of this model are:
1. Any vertex can have an edge connecting it with any other vertex. There is no schema that restricts which kinds of things can or cannot be associated.
2. Given any vertex, you can efficiently find both its incoming and its outgoing edges, and thus *traverse* the graph—i.e., follow a path through a chain of vertices—both forward and backward. (That’s why [Example 3-3](ch03.html#fig_graph_sql_schema) has indexes on both the `tail_vertex` and `head_vertex` columns.)
3. By using different labels for different kinds of vertices and relationships, you can store several different kinds of information in a single graph, while still maintaining a clean data model.
The edges table is like the many-to-many associative table/join table we saw in [“Many-to-One and Many-to-Many Relationships”](ch03.html#sec_datamodels_many_to_many), generalized to allow many different types of relationship to be stored in the same table. There may also be indexes on the labels and the properties, allowing vertices or edges with certain properties to be found efficiently.
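With the relational representation of [Example 3-3](ch03.html#fig_graph_sql_schema), such indexes might look like the following sketch (PostgreSQL syntax; it assumes the `jsonb` column is named `properties`):

```sql
-- Find all edges carrying a given label (e.g., every 'within' edge) quickly:
CREATE INDEX edges_labels ON edges (label);

-- A GIN index on the jsonb column supports containment queries on properties,
-- e.g. WHERE properties @> '{"name": "Idaho"}':
CREATE INDEX vertices_props ON vertices USING GIN (properties);
```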
> **Note**
> A limitation of graph models is that an edge can only associate two vertices with each other, whereas a relational join table can represent three-way or even higher-degree relationships by having multiple foreign key references on a single row. Such relationships can be represented in a graph by creating an additional vertex corresponding to each row of the join table, and edges to/from that vertex, or by using a *hypergraph*.
Those features give graphs a great deal of flexibility for data modeling, as illustrated in [Figure 3-6](ch03.html#fig_datamodels_graph). The figure shows a few things that would be difficult to express in a traditional relational schema, such as different kinds of regional structures in different countries (France has *départements* and *régions*, whereas the US has *counties* and *states*), quirks of history such as a country within a country (ignoring for now the intricacies of sovereign states and nations), and varying granularity of data (Lucy’s current residence is specified as a city, whereas her place of birth is specified only at the level of a state).
You could imagine extending the graph to also include many other facts about Lucy and Alain, or other people. For instance, you could use it to indicate any food allergies they have (by introducing a vertex for each allergen, and an edge between a person and an allergen to indicate an allergy), and link the allergens with a set of vertices that show which foods contain which substances. Then you could write a query to find out what is safe for each person to eat. Graphs are good for evolvability: as you add features to your application, a graph can easily be extended to accommodate changes in your application’s data structures.
@ -500,11 +522,11 @@ You could imagine extending the graph to also include many other facts about Luc
### The Cypher Query Language
*Cypher* is a query language for property graphs, originally created for the Neo4j graph database, and later developed into an open standard as *openCypher* [[37](ch03.html#Francis2018)]. Besides Neo4j, Cypher is supported by Memgraph, KùzuDB [[34](ch03.html#Feng2023)], Amazon Neptune, Apache AGE (with storage in PostgreSQL), and others. It is named after a character in the movie *The Matrix* and is not related to ciphers in cryptography [[38](ch03.html#EifremTweet)].
[Example 3-4](ch03.html#fig_cypher_create) shows the Cypher query to insert the lefthand portion of [Figure 3-6](ch03.html#fig_datamodels_graph) into a graph database. The rest of the graph can be added similarly. Each vertex is given a symbolic name like `usa` or `idaho`. That name is not stored in the database, but only used internally within the query to create edges between the vertices, using an arrow notation: `(idaho) -[:WITHIN]-> (usa)` creates an edge labeled `WITHIN`, with `idaho` as the tail node and `usa` as the head node.
> Example 3-4. A subset of the data in [Figure 3-6](ch03.html#fig_datamodels_graph), represented as a Cypher query
```cypher
CREATE
@ -516,13 +538,13 @@ CREATE
(lucy) -[:BORN_IN]-> (idaho)
```
When all the vertices and edges of [Figure 3-6](ch03.html#fig_datamodels_graph) are added to the database, we can start asking interesting questions: for example, *find the names of all the people who emigrated from the United States to Europe*. That is, find all the vertices that have a `BORN_IN` edge to a location within the US, and also a `LIVES_IN` edge to a location within Europe, and return the `name` property of each of those vertices.
[Example 3-5](ch03.html#fig_cypher_query) shows how to express that query in Cypher. The same arrow notation is used in a `MATCH` clause to find patterns in the graph: `(person) -[:BORN_IN]-> ()` matches any two vertices that are related by an edge labeled `BORN_IN`. The tail vertex of that edge is bound to the variable `person`, and the head vertex is left unnamed.
> Example 3-5. Cypher query to find people who emigrated from the US to Europe
```cypher
MATCH
(person) -[:BORN_IN]-> () -[:WITHIN*0..]-> (:Location {name:'United States'}),
(person) -[:LIVES_IN]-> () -[:WITHIN*0..]-> (:Location {name:'Europe'})
@ -546,7 +568,7 @@ But equivalently, you could start with the two `Location` vertices and work back
### Graph Queries in SQL
[Example 3-3](ch03.html#fig_graph_sql_schema) suggested that graph data can be represented in a relational database. But if we put graph data in a relational structure, can we also query it using SQL?
The answer is yes, but with some difficulty. Every edge that you traverse in a graph query is effectively a join with the `edges` table. In a relational database, you usually know in advance which joins you need in your query. On the other hand, in a graph query, you may need to traverse a variable number of edges before you find the vertex you’re looking for—that is, the number of joins is not fixed in advance.
@ -554,11 +576,11 @@ In our example, that happens in the `() -[:WITHIN*0..]-> ()` pattern in the Cyph
In Cypher, `:WITHIN*0..` expresses that fact very concisely: it means “follow a `WITHIN` edge, zero or more times.” It is like the `*` operator in a regular expression.
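Before turning to the recursive solution, here is a hedged sketch of what a *fixed-depth* version looks like against the tables of [Example 3-3](ch03.html#fig_graph_sql_schema) (again assuming the `jsonb` column is named `properties`). It only finds people whose birthplace lies *directly* within the United States; every additional `within` hop would require yet another self-join on `edges`:

```sql
-- People born exactly one 'within' hop below the 'United States' vertex.
-- A city -> county -> state -> country nesting would need more joins,
-- so the join depth cannot be fixed in advance.
SELECT person.properties->>'name'
FROM vertices person
JOIN edges born_in ON born_in.tail_vertex = person.vertex_id
                  AND born_in.label = 'born_in'
JOIN edges within1 ON within1.tail_vertex = born_in.head_vertex
                  AND within1.label = 'within'
JOIN vertices usa  ON usa.vertex_id = within1.head_vertex
WHERE usa.properties->>'name' = 'United States';
```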
Since SQL:1999, this idea of variable-length traversal paths in a query can be expressed using something called *recursive common table expressions* (the `WITH RECURSIVE` syntax). [Example 3-6](ch03.html#fig_graph_sql_query) shows the same query—finding the names of people who emigrated from the US to Europe—expressed in SQL using this technique. However, the syntax is very clumsy in comparison to Cypher.
> Example 3-6. The same query as [Example 3-5](ch03.html#fig_cypher_query), written in SQL using recursive common table expressions
```sql
WITH RECURSIVE
-- in_usa is the set of vertex IDs of all locations within the United States
@ -602,33 +624,33 @@ JOIN born_in_usa ON vertices.vertex_id = born_in_usa.vertex_id
JOIN lives_in_europe ON vertices.vertex_id = lives_in_europe.vertex_id;
```
- [](ch03.html#co_data_models_and_query_languages_CO2-1)
First find the vertex whose `name` property has the value `"United States"`, and make it the first element of the set of vertices `in_usa`.
- [](ch03.html#co_data_models_and_query_languages_CO2-2)
Follow all incoming `within` edges from vertices in the set `in_usa`, and add them to the same set, until all incoming `within` edges have been visited.
- [](ch03.html#co_data_models_and_query_languages_CO2-3)
Do the same starting with the vertex whose `name` property has the value `"Europe"`, and build up the set of vertices `in_europe`.
- [](ch03.html#co_data_models_and_query_languages_CO2-4)
For each of the vertices in the set `in_usa`, follow incoming `born_in` edges to find people who were born in some place within the United States.
- [](ch03.html#co_data_models_and_query_languages_CO2-5)
Similarly, for each of the vertices in the set `in_europe`, follow incoming `lives_in` edges to find people who live in Europe.
- [](ch03.html#co_data_models_and_query_languages_CO2-6)
Finally, intersect the set of people born in the USA with the set of people living in Europe, by joining them.
The fact that a 4-line Cypher query requires 31 lines in SQL shows how much of a difference the right choice of data model and query language can make. And this is just the beginning; there are more details to consider, e.g., around handling cycles, and choosing between breadth-first or depth-first traversal [[39](ch03.html#Tisiot2021)]. Oracle has a different SQL extension for recursive queries, which it calls *hierarchical* [[40](ch03.html#Goel2020)].
However, the situation may be improving: at the time of writing, there are plans to add a graph query language called GQL to the SQL standard [[41](ch03.html#Deutsch2022), [42](ch03.html#Green2019)], which will provide a syntax inspired by Cypher, GSQL [[43](ch03.html#Deutsch2018)], and PGQL [[44](ch03.html#vanRest2016)].
--------
@ -641,16 +663,16 @@ In a triple-store, all information is stored in the form of very simple three-pa
The subject of a triple is equivalent to a vertex in a graph. The object is one of two things:
1. A value of a primitive datatype, such as a string or a number. In that case, the predicate and object of the triple are equivalent to the key and value of a property on the subject vertex. Using the example from [Figure 3-6](ch03.html#fig_datamodels_graph), (*lucy*, *birthYear*, *1989*) is like a vertex `lucy` with properties `{"birthYear": 1989}`.
2. Another vertex in the graph. In that case, the predicate is an edge in the graph, the subject is the tail vertex, and the object is the head vertex. For example, in (*lucy*, *marriedTo*, *alain*) the subject and object *lucy* and *alain* are both vertices, and the predicate *marriedTo* is the label of the edge that connects them.
> **Note**
>
> To be precise, databases that offer a triple-like data model often need to store some additional metadata on each tuple. For example, AWS Neptune uses quads (4-tuples) by adding a graph ID to each triple [[45](ch03.html#NeptuneDataModel)]; Datomic uses 5-tuples, extending each triple with a transaction ID and a boolean to indicate deletion [[46](ch03.html#DatomicDataModel)]. Since these databases retain the basic *subject-predicate-object* structure explained above, this book nevertheless calls them triple-stores.
[Example 3-7](ch03.html#fig_graph_n3_triples) shows the same data as in [Example 3-4](ch03.html#fig_cypher_create), written as triples in a format called *Turtle*, a subset of *Notation3* (*N3*) [[47](ch03.html#Beckett2011)].
> Example 3-7. A subset of the data in [Figure 3-6](ch03.html#fig_datamodels_graph), represented as Turtle triples
```
@prefix : <urn:example:>.
@ -672,9 +694,9 @@ _:namerica :type "continent".
In this example, vertices of the graph are written as `_:someName`. The name doesn’t mean anything outside of this file; it exists only because we otherwise wouldn’t know which triples refer to the same vertex. When the predicate represents an edge, the object is a vertex, as in `_:idaho :within _:usa`. When the predicate is a property, the object is a string literal, as in `_:usa :name "United States"`.
It’s quite repetitive to repeat the same subject over and over again, but fortunately you can use semicolons to say multiple things about the same subject. This makes the Turtle format quite readable: see [Example 3-8](ch03.html#fig_graph_n3_shorthand).
> Example 3-8. A more concise way of writing the data in [Example 3-7](ch03.html#fig_graph_n3_triples)
```
@prefix : <urn:example:>.
@ -686,17 +708,17 @@ _:namerica a :Location; :name "North America"; :type "continent".
#### The Semantic Web
Some of the research and development effort on triple stores was motivated by the *Semantic Web*, an early-2000s effort to facilitate internet-wide data exchange by publishing data not only as human-readable web pages, but also in a standardized, machine-readable format. Although the Semantic Web as originally envisioned did not succeed [[48](ch03.html#Target2018), [49](ch03.html#MendelGleason2022)], the legacy of the Semantic Web project lives on in a couple of specific technologies: *linked data* standards such as JSON-LD [[50](ch03.html#Sporny2014)], *ontologies* used in biomedical science [[51](ch03.html#MichiganOntologies)], Facebook’s Open Graph protocol [[52](ch03.html#OpenGraph)] (which is used for link unfurling [[53](ch03.html#Haughey2015)]), knowledge graphs such as Wikidata, and standardized vocabularies for structured data maintained by [`schema.org`](https://schema.org/).
Triple-stores are another Semantic Web technology that has found use outside of its original use case: even if you have no interest in the Semantic Web, triples can be a good internal data model for applications.
#### The RDF data model
The Turtle language we used in [Example 3-8](ch03.html#fig_graph_n3_shorthand) is actually a way of encoding data in the *Resource Description Framework* (RDF) [[54](ch03.html#W3CRDF)], a data model that was designed for the Semantic Web. RDF data can also be encoded in other ways, for example (more verbosely) in XML, as shown in [Example 3-9](ch03.html#fig_graph_rdf_xml). Tools like Apache Jena can automatically convert between different RDF encodings.
> Example 3-9. The data of [Example 3-8](ch03.html#fig_graph_n3_shorthand), expressed using RDF/XML syntax
```xml
<rdf:RDF xmlns="urn:example:"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
@ -730,11 +752,11 @@ The URL `<http://my-company.com/namespace>` doesn’t necessarily need to resolv
#### The SPARQL query language
*SPARQL* is a query language for triple-stores using the RDF data model [[55](ch03.html#Harris2013)]. (It is an acronym for *SPARQL Protocol and RDF Query Language*, pronounced “sparkle.”) It predates Cypher, and since Cypher’s pattern matching is borrowed from SPARQL, they look quite similar.
The same query as before—finding people who have moved from the US to Europe—is similarly concise in SPARQL as it is in Cypher (see [Example 3-10](ch03.html#fig_sparql_query)).
> Example 3-10. The same query as [Example 3-5](ch03.html#fig_cypher_query), expressed in SPARQL
```
PREFIX : <urn:example:>
@ -762,24 +784,24 @@ Because RDF doesn’t distinguish between properties and edges but just uses pre
?usa :name "United States". # SPARQL
```
SPARQL is supported by Amazon Neptune, AllegroGraph, Blazegraph, OpenLink Virtuoso, Apache Jena, and various other triple stores [[35](ch03.html#Besta2019)].
--------
### Datalog: Recursive Relational Queries
Datalog is a much older language than SPARQL or Cypher: it arose from academic research in the 1980s [[56](ch03.html#Green2013), [57](ch03.html#Ceri1989), [58](ch03.html#Abiteboul1995)]. It is less well known among software engineers and not widely supported in mainstream databases, but it ought to be better-known since it is a very expressive language that is particularly powerful for complex queries. Several niche databases, including Datomic, LogicBlox, CozoDB, and LinkedIn’s LIquid [[59](ch03.html#Meyer2020)] use Datalog as their query language.
Datalog is actually based on a relational data model, not a graph, but it appears in the graph databases section of this book because recursive queries on graphs are a particular strength of Datalog.
The contents of a Datalog database consist of *facts*, and each fact corresponds to a row in a relational table. For example, say we have a table *location* containing locations, and it has three columns: *ID*, *name*, and *type*. The fact that the US is a country could then be written as `location(2, "United States", "country")`, where `2` is the ID of the US. In general, the statement `table(val1, val2, …)` means that `table` contains a row where the first column contains `val1`, the second column contains `val2`, and so on.
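The correspondence with relational tables is direct. As a hedged SQL sketch (the three columns come from the example above; the exact DDL is our own):

```sql
CREATE TABLE location (
    id   integer PRIMARY KEY,
    name text,
    type text
);

-- The fact location(2, "United States", "country") as a row:
INSERT INTO location (id, name, type)
VALUES (2, 'United States', 'country');
```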
[Example 3-11](ch03.html#fig_datalog_triples) shows how to write the data from the left-hand side of [Figure 3-6](ch03.html#fig_datamodels_graph) in Datalog. The edges of the graph (`within`, `born_in`, and `lives_in`) are represented as two-column join tables. For example, Lucy has the ID 100 and Idaho has the ID 3, so the relationship “Lucy was born in Idaho” is represented as `born_in(100, 3)`.
> Example 3-11. A subset of the data in [Figure 3-6](ch03.html#fig_datamodels_graph), represented as Datalog facts
```prolog
location(1, "North America", "continent").
location(2, "United States", "country").
location(3, "Idaho", "state").
@ -791,9 +813,9 @@ person(100, "Lucy").
born_in(100, 3). /* Lucy was born in Idaho */
```
Now that we have defined the data, we can write the same query as before, as shown in [Example 3-12](ch03.html#fig_datalog_query). It looks a bit different from the equivalent in Cypher or SPARQL, but don’t let that put you off. Datalog is a subset of Prolog, a programming language that you might have seen before if you’ve studied computer science.
> Example 3-12. The same query as [Example 3-5](ch03.html#fig_cypher_query), expressed in Datalog
```prolog
within_recursive(LocID, PlaceName) :- location(LocID, PlaceName, _). /* Rule 1 */
@ -813,11 +835,11 @@ us_to_europe(Person) :- migrated(Person, "United States", "Europe"). /* Rule 4 *
Cypher and SPARQL jump in right away with `SELECT`, but Datalog takes a small step at a time. We define *rules* that derive new virtual tables from the underlying facts. These derived tables are like (virtual) SQL views: they are not stored in the database, but you can query them in the same way as a table containing stored facts.
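To make the analogy concrete: rule 3 of [Example 3-12](ch03.html#fig_datalog_query) (the `migrated` rule) corresponds roughly to the SQL view sketched below. The fact-table columns are assumed (`person(id, name)`, `born_in(person_id, loc_id)`, `lives_in(person_id, loc_id)`), and `within_recursive(loc_id, place_name)` is presumed to be defined with `WITH RECURSIVE` along the lines of [Example 3-6](ch03.html#fig_graph_sql_query):

```sql
-- migrated(PName, BornIn, LivingIn) as a (virtual) SQL view:
CREATE VIEW migrated AS
SELECT p.name            AS p_name,
       born.place_name   AS born_in,
       living.place_name AS living_in
FROM person p
JOIN born_in  b ON b.person_id = p.id
JOIN within_recursive born   ON born.loc_id   = b.loc_id
JOIN lives_in l ON l.person_id = p.id
JOIN within_recursive living ON living.loc_id = l.loc_id;
```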
In [Example 3-12](ch03.html#fig_datalog_query) we define three derived tables: `within_recursive`, `migrated`, and `us_to_europe`. The name and columns of the virtual tables are defined by what appears before the `:-` symbol of each rule. For example, `migrated(PName, BornIn, LivingIn)` is a virtual table with three columns: the name of a person, the name of the place where they were born, and the name of the place where they are living.
The content of a virtual table is defined by the part of the rule after the `:-` symbol, where we try to find rows that match a certain pattern in the tables. For example, `person(PersonID, PName)` matches the row `person(100, "Lucy")`, with the variable `PersonID` bound to the value `100` and the variable `PName` bound to the value `"Lucy"`. A rule applies if the system can find a match for *all* patterns on the righthand side of the `:-` operator. When the rule applies, it’s as though the lefthand side of the `:-` was added to the database (with variables replaced by the values they matched).
One possible way of applying the rules is thus (and as illustrated in [Figure 3-7](ch03.html#fig_datalog_naive)):
1. `location(1, "North America", "continent")` exists in the database, so rule 1 applies. It generates `within_recursive(1, "North America")`.
2. `within(2, 1)` exists in the database and the previous step generated `within_recursive(1, "North America")`, so rule 2 applies. It generates `within_recursive(2, "North America")`.
@ -827,11 +849,11 @@ By repeated application of rules 1 and 2, the `within_recursive` virtual table c

> Figure 3-7. Determining that Idaho is in North America, using the Datalog rules from [Example 3-12](ch03.html#fig_datalog_query).
Now rule 3 can find people who were born in some location `BornIn` and live in some location `LivingIn`. Rule 4 invokes rule 3 with `BornIn = 'United States'` and `LivingIn = 'Europe'`, and returns only the names of the people who match the search. By querying the contents of the virtual `us_to_europe` table, the Datalog system finally gets the same answer as in the earlier Cypher and SPARQL queries.
The Datalog approach requires a different kind of thinking compared to the other query languages discussed in this chapter. It allows complex queries to be built up rule by rule, with one rule referring to other rules, similarly to the way that you break down code into functions that call each other. Just like functions can be recursive, Datalog rules can also invoke themselves, like rule 2 in [Example 3-12](ch03.html#fig_datalog_query), which enables graph traversals in Datalog queries.
--------
@ -839,9 +861,9 @@ The Datalog approach requires a different kind of thinking compared to the other
GraphQL is a query language that, by design, is much more restrictive than the other query languages we have seen in this chapter. The purpose of GraphQL is to allow client software running on a user’s device (such as a mobile app or a JavaScript web app frontend) to request a JSON document with a particular structure, containing the fields necessary for rendering its user interface. GraphQL interfaces allow developers to rapidly change queries in client code without changing server-side APIs.
GraphQL’s flexibility comes at a cost. Organizations that adopt GraphQL often need tooling to convert GraphQL queries into requests to internal services, which often use REST or gRPC (see [Link to Come]). Authorization, rate limiting, and performance challenges are additional concerns [[60](ch03.html#Bessey2024)]. GraphQL’s query language is also limited since GraphQL queries may come from an untrusted source. The language does not allow anything that could be expensive to execute, since otherwise users could perform denial-of-service attacks on a server by running lots of expensive queries. In particular, GraphQL does not allow recursive queries (unlike Cypher, SPARQL, SQL, or Datalog), and it does not allow arbitrary search conditions such as “find people who were born in the US and are now living in Europe” (unless the service owners specifically choose to offer such search functionality).
Nevertheless, GraphQL is useful. [Example 3-13](ch03.html#fig_graphql_query) shows how you might implement a group chat application such as Discord or Slack using GraphQL. The query requests all the channels that the user has access to, including the channel name and the 50 most recent messages in each channel. For each message it requests the timestamp, the message content, and the name and profile picture URL for the sender of the message. Moreover, if a message is a reply to another message, the query also requests the sender name and the content of the message it is replying to (which might be rendered in a smaller font above the reply, in order to provide some context).
> Example 3-13. Example GraphQL query for a group chat application
@ -867,9 +889,9 @@ query ChatApp {
}
```
[Example 3-14](ch03.html#fig_graphql_response) shows what a response to the query in [Example 3-13](ch03.html#fig_graphql_query) might look like. The response is a JSON document that mirrors the structure of the query: it contains exactly those attributes that were requested, no more and no less. This approach has the advantage that the server does not need to know which attributes the client requires in order to render the user interface; instead, the client can simply request what it needs. For example, this query does not request a profile picture URL for the sender of the `replyTo` message, but if the user interface were changed to add that profile picture, it would be easy for the client to add the required `imageUrl` attribute to the query without changing the server.
> Example 3-14. A possible response to the query in [Example 3-13](ch03.html#fig_graphql_query)
```
{
@ -896,9 +918,9 @@ query ChatApp {
...
```
In [Example 3-14](ch03.html#fig_graphql_response) the name and image URL of a message sender is embedded directly in the message object. If the same user sends multiple messages, this information is repeated on each message. In principle, it would be possible to reduce this duplication, but GraphQL makes the design choice to accept a larger response size in order to make it simpler to render the user interface based on the data.
The `replyTo` field is similar: in [Example 3-14](ch03.html#fig_graphql_response), the second message is a reply to the first, and the content (“Hey!…”) and sender Aaliyah are duplicated under `replyTo`. It would be possible to instead return the ID of the message being replied to, but then the client would have to make an additional request to the server if that ID is not among the 50 most recent messages returned. Duplicating the content makes it much simpler to work with the data.
The server’s database can store the data in a more normalized form, and perform the necessary joins to process a query. For example, the server might store a message along with the user ID of the sender and the ID of the message it is replying to; when it receives a query like the one above, the server would then resolve those IDs to find the records they refer to. However, the client can only ask the server to perform joins that are explicitly offered in the GraphQL schema.
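
To make this concrete, here is a minimal, hypothetical sketch of such server-side ID resolution (plain Python rather than a real GraphQL server; the tables, IDs, and field names are invented for illustration):

```python
# Hypothetical normalized storage: each message stores the sender's user ID
# and the ID of the message it replies to, and the server joins them.
users = {
    17: {"name": "Aaliyah", "imageUrl": "https://example.com/aaliyah.jpg"},
    23: {"name": "Caleb", "imageUrl": "https://example.com/caleb.jpg"},
}
messages = {
    1: {"senderId": 17, "content": "Hey!…", "replyToId": None},
    2: {"senderId": 23, "content": "Replying…", "replyToId": 1},
}

def render_message(message_id):
    msg = messages[message_id]
    result = {
        "content": msg["content"],
        "sender": users[msg["senderId"]],    # join: user ID -> user record
    }
    if msg["replyToId"] is not None:         # join: message ID -> parent message
        parent = messages[msg["replyToId"]]
        result["replyTo"] = {
            "content": parent["content"],
            "sender": {"name": users[parent["senderId"]]["name"]},
        }
    return result
```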
@ -919,25 +941,25 @@ Even though the response to a GraphQL query looks similar to a response from a d
In all the data models we have discussed so far, the data is queried in the same form as it is written—be it JSON documents, rows in tables, or vertices and edges in a graph. However, in complex applications it can sometimes be difficult to find a single data representation that is able to satisfy all the different ways that the data needs to be queried and presented. In such situations, it can be beneficial to write data in one form, and then to derive from it several representations that are optimized for different types of reads.
We previously saw this idea in [“Systems of Record and Derived Data”](ch01.html#sec_introduction_derived), and ETL (see [“Data Warehousing”](ch01.html#sec_introduction_dwh)) is one example of such a derivation process. Now we will take the idea further. If we are going to derive one data representation from another anyway, we can choose different representations that are optimized for writing and for reading, respectively. How would you model your data if you only wanted to optimize it for writing, and if efficient queries were of no concern?
Perhaps the simplest, fastest, and most expressive way of writing data is an *event log*: every time you want to write some data, you encode it as a self-contained string (perhaps as JSON), including a timestamp, and then append it to a sequence of events. Events in this log are *immutable*: you never change or delete them, you only ever append more events to the log (which may supersede earlier events). An event can contain arbitrary properties.
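
As a minimal sketch, assuming newline-delimited JSON in a local file (the filename and event properties here are illustrative, not any particular system’s format):

```python
import json
import time

def append_event(log_path, event_type, **properties):
    """Encode an event as a self-contained JSON string and append it to the log."""
    event = {"type": event_type, "timestamp": time.time(), **properties}
    with open(log_path, "a") as log:  # append-only: existing events are never changed
        log.write(json.dumps(event) + "\n")

append_event("conference.log", "SeatsBooked", bookingId=271, seats=2)
append_event("conference.log", "BookingCancelled", bookingId=271)
```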
[Figure 3-8](ch03.html#fig_event_sourcing) shows an example that could be taken from a conference management system. A conference can be a complex business domain: not only can individual attendees register and pay by card, but companies can also order seats in bulk, pay by invoice, and then later assign the seats to individual people. Some number of seats may be reserved for speakers, sponsors, volunteer helpers, and so on. Reservations may also be cancelled, and meanwhile, the conference organizer might change the capacity of the event by moving it to a different room. With all of this going on, simply calculating the number of available seats becomes a challenging query.

> Figure 3-8. Using a log of immutable events as source of truth, and deriving materialized views from it.
In [Figure 3-8](ch03.html#fig_event_sourcing), every change to the state of the conference (such as the organizer opening registrations, or attendees making and cancelling registrations) is first stored as an event. Whenever an event is appended to the log, several *materialized views* (also known as *projections* or *read models*) are also updated to reflect the effect of that event. In the conference example, there might be one materialized view that collects all information related to the status of each booking, another that computes charts for the conference organizer’s dashboard, and a third that generates files for the printer that produces the attendees’ badges.
The idea of using events as the source of truth, and expressing every state change as an event, is known as *event sourcing* [[61](ch03.html#Betts2012), [62](ch03.html#Young2014)]. The principle of maintaining separate read-optimized representations and deriving them from the write-optimized representation is called *command query responsibility segregation (CQRS)* [[63](ch03.html#Young2010)]. These terms originated in the domain-driven design (DDD) community, although similar ideas have been around for a long time, for example in *state machine replication* (see [Link to Come]).
When a request from a user comes in, it is called a *command*, and it first needs to be validated. Only once the command has been executed and it has been determined to be valid (e.g., there were enough available seats for a requested reservation), it becomes a fact, and the corresponding event is added to the log. Consequently, the event log should contain only valid events, and a consumer of the event log that builds a materialized view is not allowed to reject an event.
When modelling your data in an event sourcing style, it is recommended that you name your events in the past tense (e.g., “the seats were booked”), because an event is a record of the fact that something has happened in the past. Even if the user later decides to change or cancel, the fact remains true that they formerly held a booking, and the change or cancellation is a separate event that is added later.
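
The following sketch puts these pieces together for the conference example; the event names, the fixed capacity, and the in-memory structures are illustrative assumptions, not a prescribed design:

```python
events = []            # the append-only event log (source of truth)
seats_available = 100  # a materialized view derived from the events

def handle_book_seats(seats):
    """A command: validate it, and only then record the corresponding event."""
    if seats > seats_available:
        raise ValueError("not enough seats available")
    record_event({"type": "SeatsBooked", "seats": seats})  # past tense: a fact

def record_event(event):
    events.append(event)  # the log contains only valid events
    apply_event(event)

def apply_event(event):
    """View maintenance: consumers of the log may not reject events."""
    global seats_available
    if event["type"] == "SeatsBooked":
        seats_available -= event["seats"]
    elif event["type"] == "BookingCancelled":
        seats_available += event["seats"]

def rebuild_view():
    """Views are reproducible: delete them and replay the same events in order."""
    global seats_available
    seats_available = 100
    for event in events:
        apply_event(event)
```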
A similarity between event sourcing and a star schema fact table, as discussed in [“Stars and Snowflakes: Schemas for Analytics”](ch03.html#sec_datamodels_analytics), is that both are collections of events that happened in the past. However, rows in a fact table all have the same set of columns, whereas in event sourcing there may be many different event types, each with different properties. Moreover, a fact table is an unordered collection, while in event sourcing the order of events is important: if a booking is first made and then cancelled, processing those events in the wrong order would not make sense.
Event sourcing and CQRS have several advantages:
@ -945,7 +967,7 @@ Event sourcing and CQRS have several advantages:
- A key principle of event sourcing is that the materialized views are derived from the event log in a reproducible way: you should always be able to delete the materialized views and recompute them by processing the same events in the same order, using the same code. If there was a bug in the view maintenance code, you can just delete the view and recompute it with the new code. It’s also easier to find the bug because you can re-run the view maintenance code as often as you like and inspect its behavior.
- You can have multiple materialized views that are optimized for the particular queries that your application requires. They can be stored either in the same database as the events or a different one, depending on your needs. They can use any data model, and they can be denormalized for fast reads. You can even keep a view only in memory and avoid persisting it, as long as it’s okay to recompute the view from the event log whenever the service restarts.
- If you decide you want to present the existing information in a new way, it is easy to build a new materialized view from the existing event log. You can also evolve the system to support new features by adding new types of events, or new properties to existing event types (any older events remain unmodified). You can also chain new behaviors off existing events (for example, when a conference attendee cancels, their seat could be offered to the next person on the waiting list).
- If an event was written in error you can delete it again, and then you can rebuild the views without the deleted event. On the other hand, in a database where you update and delete data directly, a committed transaction is often difficult to reverse. Event sourcing can therefore reduce the number of irreversible actions in the system, making it easier to change (see [“Evolvability: Making Change Easy”](ch02.html#sec_introduction_evolvability)).
- The event log can also serve as an audit log of everything that happened in the system, which is valuable in regulated industries that require such auditability.
However, event sourcing and CQRS also have downsides:
@ -965,7 +987,7 @@ The only important requirement is that the event storage system must guarantee t
## Dataframes, Matrices, and Arrays

The data models we have seen so far in this chapter are generally used for both transaction processing and analytics purposes (see [“Transaction Processing versus Analytics”](ch01.html#sec_introduction_analytics)). There are also some data models that you are likely to encounter in an analytical or scientific context, but that rarely feature in OLTP systems: dataframes and multidimensional arrays of numbers such as matrices.
Dataframes are a data model supported by the R language, the Pandas library for Python, Apache Spark, ArcticDB, Dask, and other systems. They are a popular tool for data scientists preparing data for training machine learning models, but they are also widely used for data exploration, statistical data analysis, data visualization, and similar purposes.
@ -985,32 +1007,31 @@ At first glance, a dataframe is similar to a table in a relational database or a
Instead of a declarative query such as SQL, a dataframe is typically manipulated through a series of commands that modify its structure and content. This matches the typical workflow of data scientists, who incrementally “wrangle” the data into a form that allows them to find answers to the questions they are asking. These manipulations usually take place on the data scientist’s private copy of the dataset, often on their local machine, although the end result may be shared with other users.
Dataframe APIs also offer a wide variety of operations that go far beyond what relational databases offer, and the data model is often used in ways that are very different from typical relational data modelling [[64](ch03.html#Petersohn2020)]. For example, a common use of dataframes is to transform data from a relational-like representation into a matrix or multidimensional array representation, which is the form that many machine learning algorithms expect of their input.
A simple example of such a transformation is shown in [Figure 3-9](ch03.html#fig_dataframe_to_matrix). On the left we have a relational table of how different users have rated various movies (on a scale of 1 to 5), and on the right the data has been transformed into a matrix where each column is a movie and each row is a user (similarly to a *pivot table* in a spreadsheet). The matrix is *sparse*, which means there is no data for many user-movie combinations, but this is fine. This matrix may have many thousands of columns and would therefore not fit well in a relational database, but dataframes and libraries that offer sparse arrays (such as NumPy for Python) can handle such data easily.
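
For instance, the transformation shown in [Figure 3-9](ch03.html#fig_dataframe_to_matrix) could be sketched with Pandas as follows (the user IDs, movie IDs, and ratings are invented for illustration):

```python
import pandas as pd

# Relational-style table: one row per (user, movie, rating) fact.
ratings = pd.DataFrame({
    "user_id":  [1, 1, 2, 3],
    "movie_id": [27, 63, 27, 94],
    "rating":   [5, 3, 4, 1],
})

# Pivot into a matrix: one row per user, one column per movie.
# Missing user-movie combinations simply become NaN (a sparse matrix).
matrix = ratings.pivot(index="user_id", columns="movie_id", values="rating")
print(matrix)
```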

> Figure 3-9. Transforming a relational database of movie ratings into a matrix representation.

A matrix can only contain numbers, and various techniques are used to transform non-numerical data into numbers in the matrix. For example:
- Dates (which are omitted from the example matrix in [Figure 3-9](ch03.html#fig_dataframe_to_matrix)) could be scaled to be floating-point numbers within some suitable range.
- For columns that can only take one of a small, fixed set of values (for example, the genre of a movie in a database of movies), a *one-hot encoding* is often used: we create a column for each possible value (one for “comedy”, one for “drama”, one for “horror”, etc.), and for each row representing a movie, we put a 1 in the column corresponding to the genre of that movie, and a 0 in all the other columns. This representation also easily generalizes to movies that fit within several genres.
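
As an illustration, Pandas can derive such an encoding directly from a genre column (a minimal sketch with invented movie data; `get_dummies` creates one 0/1 column per distinct value):

```python
import pandas as pd

movies = pd.DataFrame({
    "movie_id": [27, 63, 94],
    "genre":    ["comedy", "drama", "horror"],
})

# One-hot encoding: one column per genre, 1 where the movie has that genre.
one_hot = pd.get_dummies(movies, columns=["genre"], dtype=int)
print(one_hot)  # columns: movie_id, genre_comedy, genre_drama, genre_horror
```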
Once the data is in the form of a matrix of numbers, it is amenable to linear algebra operations, which form the basis of many machine learning algorithms. For example, the data in [Figure 3-9](ch03.html#fig_dataframe_to_matrix) could be a part of a system for recommending movies that the user may like. Dataframes are flexible enough to allow data to be gradually evolved from a relational form into a matrix representation, while giving the data scientist control over the representation that is most suitable for achieving the goals of the data analysis or model training process.
There are also databases such as TileDB [[65](ch03.html#Papadopoulos2016)] that specialize in storing large multidimensional arrays of numbers; they are called *array databases* and are most commonly used for scientific datasets such as geospatial measurements (raster data on a regularly spaced grid), medical imaging, or observations from astronomical telescopes [[66](ch03.html#Rusu2022)]. Dataframes are also used in the financial industry for representing *time series data*, such as the prices of assets and trades over time [[67](ch03.html#Targett2023)].
@ -1018,6 +1039,30 @@ There are also databases such as TileDB [[65](https://learning.oreilly.com/libra
## Summary

Data models are a huge subject, and in this chapter we have taken a quick look at a broad variety of different models. We didn’t have space to go into all the details of each model, but hopefully the overview has been enough to whet your appetite to find out more about the model that best fits your application’s requirements.

The *relational model*, despite being more than half a century old, remains an important data model for many applications, especially in data warehousing and business analytics, where relational star or snowflake schemas and SQL queries are ubiquitous. However, several alternatives to relational data have also become popular in other domains:

- The *document model* targets use cases where data comes in self-contained JSON documents, and where relationships between one document and another are rare.
- *Graph data models* go in the opposite direction, targeting use cases where anything is potentially related to everything, and where queries may need to traverse multiple hops to find the data of interest (which can be expressed using recursive queries in Cypher, SPARQL, or Datalog).
- *Dataframes* generalize relational data to large numbers of columns, and thereby provide a bridge between databases and the multidimensional arrays that form the basis of much machine learning, statistical data analysis, and scientific computing.

To some degree, one model can be emulated in terms of another (for example, graph data can be represented in a relational database), but the result can be awkward, as we saw with the support for recursive queries in SQL.

Various specialist databases have therefore been developed for each data model, providing query languages and storage engines that are optimized for a particular model. However, databases also tend to expand into neighboring niches by adding support for other data models: for example, relational databases have added support for document data in the form of JSON columns, document databases have added relational-like joins, and support for graph data in SQL is gradually improving.

Another model we discussed is *event sourcing*, which represents data as an append-only log of immutable events, and which can be advantageous for modeling activities in complex business domains. An append-only log is good for writing data (as we shall see in [Link to Come]); in order to support efficient queries, the event log is translated into read-optimized materialized views through CQRS.

One thing that non-relational data models have in common is that they typically don’t enforce a schema for the data they store, which can make it easier to adapt applications to changing requirements. However, your application most likely still assumes that data has a certain structure; it’s just a question of whether the schema is explicit (enforced on write) or implicit (assumed on read).

Although we have covered a lot of ground, there are still data models left unmentioned. To give just a few brief examples:

- Researchers working with genome data often need to perform *sequence-similarity searches*, which means taking one very long string (representing a DNA molecule) and matching it against a large database of strings that are similar, but not identical. None of the databases described here can handle this kind of usage, which is why researchers have written specialized genome database software like GenBank [[68](ch03.html#Benson2007)].
- Many financial systems use *ledgers* with double-entry accounting as their data model. This type of data can be represented in relational databases, but there are also databases such as TigerBeetle that specialize in this data model. Cryptocurrencies and blockchains are typically based on distributed ledgers, which also have value transfer built into their data model.
- *Full-text search* is arguably a kind of data model that is frequently used alongside databases. Information retrieval is a large specialist subject that we won’t cover in great detail in this book, but we’ll touch on search indexes and vector search in [Link to Come].

We have to leave it there for now. In the next chapter we will discuss some of the trade-offs involved in *implementing* the data models described in this chapter.
## 參考文獻
|
||||
|
||||
[[1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Brandon2024-marker)] Jamie Brandon. [Unexplanations: query optimization works because sql is declarative](https://www.scattered-thoughts.net/writing/unexplanations-sql-declarative/). *scattered-thoughts.net*, February 2024. Archived at [perma.cc/P6W2-WMFZ](https://perma.cc/P6W2-WMFZ)
|
||||
[[1](ch03.html#Brandon2024-marker)] Jamie Brandon. [Unexplanations: query optimization works because sql is declarative](https://www.scattered-thoughts.net/writing/unexplanations-sql-declarative/). *scattered-thoughts.net*, February 2024. Archived at [perma.cc/P6W2-WMFZ](https://perma.cc/P6W2-WMFZ)
|
||||
|
||||
[[2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Hellerstein2010-marker)] Joseph M. Hellerstein. [The Declarative Imperative: Experiences and Conjectures in Distributed Logic](http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-90.pdf). Tech report UCB/EECS-2010-90, Electrical Engineering and Computer Sciences, University of California at Berkeley, June 2010. Archived at [perma.cc/K56R-VVQM](https://perma.cc/K56R-VVQM)
|
||||
[[2](ch03.html#Hellerstein2010-marker)] Joseph M. Hellerstein. [The Declarative Imperative: Experiences and Conjectures in Distributed Logic](http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-90.pdf). Tech report UCB/EECS-2010-90, Electrical Engineering and Computer Sciences, University of California at Berkeley, June 2010. Archived at [perma.cc/K56R-VVQM](https://perma.cc/K56R-VVQM)
|
||||
|
||||
[[3](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Codd1970-marker)] Edgar F. Codd. [A Relational Model of Data for Large Shared Data Banks](https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf). *Communications of the ACM*, volume 13, issue 6, pages 377–387, June 1970. [doi:10.1145/362384.362685](http://dx.doi.org/10.1145/362384.362685)
|
||||
[[3](ch03.html#Codd1970-marker)] Edgar F. Codd. [A Relational Model of Data for Large Shared Data Banks](https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf). *Communications of the ACM*, volume 13, issue 6, pages 377–387, June 1970. [doi:10.1145/362384.362685](http://dx.doi.org/10.1145/362384.362685)
|
||||
|
||||
[[4](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Stonebraker2005around-marker)] Michael Stonebraker and Joseph M. Hellerstein. [What Goes Around Comes Around](http://mitpress2.mit.edu/books/chapters/0262693143chapm1.pdf). In *Readings in Database Systems*, 4th edition, MIT Press, pages 2–41, 2005. ISBN: 9780262693141
|
||||
[[4](ch03.html#Stonebraker2005around-marker)] Michael Stonebraker and Joseph M. Hellerstein. [What Goes Around Comes Around](http://mitpress2.mit.edu/books/chapters/0262693143chapm1.pdf). In *Readings in Database Systems*, 4th edition, MIT Press, pages 2–41, 2005. ISBN: 9780262693141
|
||||
|
||||
[[5](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Winand2015-marker)] Markus Winand. [Modern SQL: Beyond Relational](https://modern-sql.com/). *modern-sql.com*, 2015. Archived at [perma.cc/D63V-WAPN](https://perma.cc/D63V-WAPN)
|
||||
[[5](ch03.html#Winand2015-marker)] Markus Winand. [Modern SQL: Beyond Relational](https://modern-sql.com/). *modern-sql.com*, 2015. Archived at [perma.cc/D63V-WAPN](https://perma.cc/D63V-WAPN)
|
||||
|
||||
[[6](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Fowler2012-marker)] Martin Fowler. [OrmHate](https://martinfowler.com/bliki/OrmHate.html). *martinfowler.com*, May 2012. Archived at [perma.cc/VCM8-PKNG](https://perma.cc/VCM8-PKNG)
|
||||
[[6](ch03.html#Fowler2012-marker)] Martin Fowler. [OrmHate](https://martinfowler.com/bliki/OrmHate.html). *martinfowler.com*, May 2012. Archived at [perma.cc/VCM8-PKNG](https://perma.cc/VCM8-PKNG)
|
||||
|
||||
[[7](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Mihalcea2023-marker)] Vlad Mihalcea. [N+1 query problem with JPA and Hibernate](https://vladmihalcea.com/n-plus-1-query-problem/). *vladmihalcea.com*, January 2023. Archived at [perma.cc/79EV-TZKB](https://perma.cc/79EV-TZKB)
|
||||
[[7](ch03.html#Mihalcea2023-marker)] Vlad Mihalcea. [N+1 query problem with JPA and Hibernate](https://vladmihalcea.com/n-plus-1-query-problem/). *vladmihalcea.com*, January 2023. Archived at [perma.cc/79EV-TZKB](https://perma.cc/79EV-TZKB)
|
||||
|
||||
[[8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Schauder2023-marker)] Jens Schauder. [This is the Beginning of the End of the N+1 Problem: Introducing Single Query Loading](https://spring.io/blog/2023/08/31/this-is-the-beginning-of-the-end-of-the-n-1-problem-introducing-single-query). *spring.io*, August 2023. Archived at [perma.cc/6V96-R333](https://perma.cc/6V96-R333)
|
||||
[[8](ch03.html#Schauder2023-marker)] Jens Schauder. [This is the Beginning of the End of the N+1 Problem: Introducing Single Query Loading](https://spring.io/blog/2023/08/31/this-is-the-beginning-of-the-end-of-the-n-1-problem-introducing-single-query). *spring.io*, August 2023. Archived at [perma.cc/6V96-R333](https://perma.cc/6V96-R333)
|
||||
|
||||
[[9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Zola2014-marker)] William Zola. [6 Rules of Thumb for MongoDB Schema Design](https://www.mongodb.com/blog/post/6-rules-of-thumb-for-mongodb-schema-design). *mongodb.com*, June 2014. Archived at [perma.cc/T2BZ-PPJB](https://perma.cc/T2BZ-PPJB)
|
||||
[[9](ch03.html#Zola2014-marker)] William Zola. [6 Rules of Thumb for MongoDB Schema Design](https://www.mongodb.com/blog/post/6-rules-of-thumb-for-mongodb-schema-design). *mongodb.com*, June 2014. Archived at [perma.cc/T2BZ-PPJB](https://perma.cc/T2BZ-PPJB)
|
||||
|
||||
[[10](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Andrews2023-marker)] Sidney Andrews and Christopher McClister. [Data modeling in Azure Cosmos DB](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/modeling-data). *learn.microsoft.com*, February 2023. Archived at [archive.org](https://web.archive.org/web/20230207193233/https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/modeling-data)
|
||||
[[10](ch03.html#Andrews2023-marker)] Sidney Andrews and Christopher McClister. [Data modeling in Azure Cosmos DB](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/modeling-data). *learn.microsoft.com*, February 2023. Archived at [archive.org](https://web.archive.org/web/20230207193233/https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/modeling-data)
|
||||
|
||||
[[11](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Krikorian2012_ch3-marker)] Raffi Krikorian. [Timelines at Scale](http://www.infoq.com/presentations/Twitter-Timeline-Scalability). At *QCon San Francisco*, November 2012. Archived at [perma.cc/V9G5-KLYK](https://perma.cc/V9G5-KLYK)
|
||||
[[11](ch03.html#Krikorian2012_ch3-marker)] Raffi Krikorian. [Timelines at Scale](http://www.infoq.com/presentations/Twitter-Timeline-Scalability). At *QCon San Francisco*, November 2012. Archived at [perma.cc/V9G5-KLYK](https://perma.cc/V9G5-KLYK)
|
||||
|
||||
[[12](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Kimball2013_ch3-marker)] Ralph Kimball and Margy Ross. [*The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling*](https://learning.oreilly.com/library/view/the-data-warehouse/9781118530801/), 3rd edition. John Wiley & Sons, July 2013. ISBN: 9781118530801
|
||||
[[12](ch03.html#Kimball2013_ch3-marker)] Ralph Kimball and Margy Ross. [*The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling*](https://learning.oreilly.com/library/view/the-data-warehouse/9781118530801/), 3rd edition. John Wiley & Sons, July 2013. ISBN: 9781118530801
|
||||
|
||||
[[13](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Kaminsky2022-marker)] Michael Kaminsky. [Data warehouse modeling: Star schema vs. OBT](https://www.fivetran.com/blog/star-schema-vs-obt). *fivetran.com*, August 2022. Archived at [perma.cc/2PZK-BFFP](https://perma.cc/2PZK-BFFP)
|
||||
[[13](ch03.html#Kaminsky2022-marker)] Michael Kaminsky. [Data warehouse modeling: Star schema vs. OBT](https://www.fivetran.com/blog/star-schema-vs-obt). *fivetran.com*, August 2022. Archived at [perma.cc/2PZK-BFFP](https://perma.cc/2PZK-BFFP)
|
||||
|
||||
[[14](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Nelson2018-marker)] Joe Nelson. [User-defined Order in SQL](https://begriffs.com/posts/2018-03-20-user-defined-order.html). *begriffs.com*, March 2018. Archived at [perma.cc/GS3W-F7AD](https://perma.cc/GS3W-F7AD)
|
||||
[[14](ch03.html#Nelson2018-marker)] Joe Nelson. [User-defined Order in SQL](https://begriffs.com/posts/2018-03-20-user-defined-order.html). *begriffs.com*, March 2018. Archived at [perma.cc/GS3W-F7AD](https://perma.cc/GS3W-F7AD)
|
||||
|
||||
[[15](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Wallace2017-marker)] Evan Wallace. [Realtime Editing of Ordered Sequences](https://www.figma.com/blog/realtime-editing-of-ordered-sequences/). *figma.com*, March 2017. Archived at [perma.cc/K6ER-CQZW](https://perma.cc/K6ER-CQZW)
|
||||
[[15](ch03.html#Wallace2017-marker)] Evan Wallace. [Realtime Editing of Ordered Sequences](https://www.figma.com/blog/realtime-editing-of-ordered-sequences/). *figma.com*, March 2017. Archived at [perma.cc/K6ER-CQZW](https://perma.cc/K6ER-CQZW)
|
||||
|
||||
[[16](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Greenspan2020-marker)] David Greenspan. [Implementing Fractional Indexing](https://observablehq.com/@dgreensp/implementing-fractional-indexing). *observablehq.com*, October 2020. Archived at [perma.cc/5N4R-MREN](https://perma.cc/5N4R-MREN)
|
||||
[[16](ch03.html#Greenspan2020-marker)] David Greenspan. [Implementing Fractional Indexing](https://observablehq.com/@dgreensp/implementing-fractional-indexing). *observablehq.com*, October 2020. Archived at [perma.cc/5N4R-MREN](https://perma.cc/5N4R-MREN)
|
||||
|
||||
[[17](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Schemaless-marker)] Martin Fowler. [Schemaless Data Structures](http://martinfowler.com/articles/schemaless/). *martinfowler.com*, January 2013.
|
||||
[[17](ch03.html#Schemaless-marker)] Martin Fowler. [Schemaless Data Structures](http://martinfowler.com/articles/schemaless/). *martinfowler.com*, January 2013.
|
||||
|
||||
[[18](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Awadallah2009-marker)] Amr Awadallah. [Schema-on-Read vs. Schema-on-Write](https://www.slideshare.net/awadallah/schemaonread-vs-schemaonwrite). At *Berkeley EECS RAD Lab Retreat*, Santa Cruz, CA, May 2009. Archived at [perma.cc/DTB2-JCFR](https://perma.cc/DTB2-JCFR)
|
||||
[[18](ch03.html#Awadallah2009-marker)] Amr Awadallah. [Schema-on-Read vs. Schema-on-Write](https://www.slideshare.net/awadallah/schemaonread-vs-schemaonwrite). At *Berkeley EECS RAD Lab Retreat*, Santa Cruz, CA, May 2009. Archived at [perma.cc/DTB2-JCFR](https://perma.cc/DTB2-JCFR)
|
||||
|
||||
[[19](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Odersky2013-marker)] Martin Odersky. [The Trouble with Types](http://www.infoq.com/presentations/data-types-issues). At *Strange Loop*, September 2013. Archived at [perma.cc/85QE-PVEP](https://perma.cc/85QE-PVEP)
|
||||
[[19](ch03.html#Odersky2013-marker)] Martin Odersky. [The Trouble with Types](http://www.infoq.com/presentations/data-types-issues). At *Strange Loop*, September 2013. Archived at [perma.cc/85QE-PVEP](https://perma.cc/85QE-PVEP)
|
||||
|
||||
[[20](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Irwin2013-marker)] Conrad Irwin. [MongoDB—Confessions of a PostgreSQL Lover](https://speakerdeck.com/conradirwin/mongodb-confessions-of-a-postgresql-lover). At *HTML5DevConf*, October 2013. Archived at [perma.cc/C2J6-3AL5](https://perma.cc/C2J6-3AL5)
|
||||
[[20](ch03.html#Irwin2013-marker)] Conrad Irwin. [MongoDB—Confessions of a PostgreSQL Lover](https://speakerdeck.com/conradirwin/mongodb-confessions-of-a-postgresql-lover). At *HTML5DevConf*, October 2013. Archived at [perma.cc/C2J6-3AL5](https://perma.cc/C2J6-3AL5)
|
||||
|
||||
[[21](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Percona2023-marker)] [Percona Toolkit Documentation: pt-online-schema-change](https://docs.percona.com/percona-toolkit/pt-online-schema-change.html). *docs.percona.com*, 2023. Archived at [perma.cc/9K8R-E5UH](https://perma.cc/9K8R-E5UH)
|
||||
[[21](ch03.html#Percona2023-marker)] [Percona Toolkit Documentation: pt-online-schema-change](https://docs.percona.com/percona-toolkit/pt-online-schema-change.html). *docs.percona.com*, 2023. Archived at [perma.cc/9K8R-E5UH](https://perma.cc/9K8R-E5UH)
|
||||
|
||||
[[22](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Noach2016-marker)] Shlomi Noach. [gh-ost: GitHub’s Online Schema Migration Tool for MySQL](https://github.blog/2016-08-01-gh-ost-github-s-online-migration-tool-for-mysql/). *github.blog*, August 2016. Archived at [perma.cc/7XAG-XB72](https://perma.cc/7XAG-XB72)
|
||||
[[22](ch03.html#Noach2016-marker)] Shlomi Noach. [gh-ost: GitHub’s Online Schema Migration Tool for MySQL](https://github.blog/2016-08-01-gh-ost-github-s-online-migration-tool-for-mysql/). *github.blog*, August 2016. Archived at [perma.cc/7XAG-XB72](https://perma.cc/7XAG-XB72)
|
||||
|
||||
[[23](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Mukherjee2022-marker)] Shayon Mukherjee. [pg-osc: Zero downtime schema changes in PostgreSQL](https://www.shayon.dev/post/2022/47/pg-osc-zero-downtime-schema-changes-in-postgresql/). *shayon.dev*, February 2022. Archived at [perma.cc/35WN-7WMY](https://perma.cc/35WN-7WMY)
|
||||
[[23](ch03.html#Mukherjee2022-marker)] Shayon Mukherjee. [pg-osc: Zero downtime schema changes in PostgreSQL](https://www.shayon.dev/post/2022/47/pg-osc-zero-downtime-schema-changes-in-postgresql/). *shayon.dev*, February 2022. Archived at [perma.cc/35WN-7WMY](https://perma.cc/35WN-7WMY)
|
||||
|
||||
[[24](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#PerezAradros2023-marker)] Carlos Pérez-Aradros Herce. [Introducing pgroll: zero-downtime, reversible, schema migrations for Postgres](https://xata.io/blog/pgroll-schema-migrations-postgres). *xata.io*, October 2023. Archived at [archive.org](https://web.archive.org/web/20231008161750/https://xata.io/blog/pgroll-schema-migrations-postgres)
|
||||
[[24](ch03.html#PerezAradros2023-marker)] Carlos Pérez-Aradros Herce. [Introducing pgroll: zero-downtime, reversible, schema migrations for Postgres](https://xata.io/blog/pgroll-schema-migrations-postgres). *xata.io*, October 2023. Archived at [archive.org](https://web.archive.org/web/20231008161750/https://xata.io/blog/pgroll-schema-migrations-postgres)
|
||||
|
||||
[[25](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Corbett2012_ch2-marker)] James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. [Spanner: Google’s Globally-Distributed Database](https://research.google/pubs/pub39966/). At *10th USENIX Symposium on Operating System Design and Implementation* (OSDI), October 2012.
|
||||
[[25](ch03.html#Corbett2012_ch2-marker)] James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. [Spanner: Google’s Globally-Distributed Database](https://research.google/pubs/pub39966/). At *10th USENIX Symposium on Operating System Design and Implementation* (OSDI), October 2012.
|
||||
|
||||
[[26](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#BurlesonCluster-marker)] Donald K. Burleson. [Reduce I/O with Oracle Cluster Tables](http://www.dba-oracle.com/oracle_tip_hash_index_cluster_table.htm). *dba-oracle.com*. Archived at [perma.cc/7LBJ-9X2C](https://perma.cc/7LBJ-9X2C)
|
||||
[[26](ch03.html#BurlesonCluster-marker)] Donald K. Burleson. [Reduce I/O with Oracle Cluster Tables](http://www.dba-oracle.com/oracle_tip_hash_index_cluster_table.htm). *dba-oracle.com*. Archived at [perma.cc/7LBJ-9X2C](https://perma.cc/7LBJ-9X2C)
|
||||
|
||||
[[27](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Chang2006_ch2-marker)] Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. [Bigtable: A Distributed Storage System for Structured Data](https://research.google/pubs/pub27898/). At *7th USENIX Symposium on Operating System Design and Implementation* (OSDI), November 2006.
|
||||
[[27](ch03.html#Chang2006_ch2-marker)] Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. [Bigtable: A Distributed Storage System for Structured Data](https://research.google/pubs/pub27898/). At *7th USENIX Symposium on Operating System Design and Implementation* (OSDI), November 2006.
|
||||
|
||||
[[28](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Walmsley2015-marker)] Priscilla Walmsley. [*XQuery, 2nd Edition*](https://learning.oreilly.com/library/view/xquery-2nd-edition/9781491915080/). O’Reilly Media, December 2015. ISBN: 9781491915080
|
||||
[[28](ch03.html#Walmsley2015-marker)] Priscilla Walmsley. [*XQuery, 2nd Edition*](https://learning.oreilly.com/library/view/xquery-2nd-edition/9781491915080/). O’Reilly Media, December 2015. ISBN: 9781491915080
|
||||
|
||||
[[29](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Bryan2013-marker)] Paul C. Bryan, Kris Zyp, and Mark Nottingham. [JavaScript Object Notation (JSON) Pointer](https://www.rfc-editor.org/rfc/rfc6901). RFC 6901, IETF, April 2013.
|
||||
[[29](ch03.html#Bryan2013-marker)] Paul C. Bryan, Kris Zyp, and Mark Nottingham. [JavaScript Object Notation (JSON) Pointer](https://www.rfc-editor.org/rfc/rfc6901). RFC 6901, IETF, April 2013.
|
||||
|
||||
[[30](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Goessner2024-marker)] Stefan Gössner, Glyn Normington, and Carsten Bormann. [JSONPath: Query Expressions for JSON](https://www.rfc-editor.org/rfc/rfc9535.html). RFC 9535, IETF, February 2024.
|
||||
[[30](ch03.html#Goessner2024-marker)] Stefan Gössner, Glyn Normington, and Carsten Bormann. [JSONPath: Query Expressions for JSON](https://www.rfc-editor.org/rfc/rfc9535.html). RFC 9535, IETF, February 2024.
|
||||
|
||||
[[31](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Page1999-marker)] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. [The PageRank Citation Ranking: Bringing Order to the Web](http://ilpubs.stanford.edu:8090/422/). Technical Report 1999-66, Stanford University InfoLab, November 1999. Archived at [perma.cc/UML9-UZHW](https://perma.cc/UML9-UZHW)
|
||||
[[31](ch03.html#Page1999-marker)] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. [The PageRank Citation Ranking: Bringing Order to the Web](http://ilpubs.stanford.edu:8090/422/). Technical Report 1999-66, Stanford University InfoLab, November 1999. Archived at [perma.cc/UML9-UZHW](https://perma.cc/UML9-UZHW)
|
||||
|
||||
[[32](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Bronson2013-marker)] Nathan Bronson, Zach Amsden, George Cabrera, Prasad Chakka, Peter Dimov, Hui Ding, Jack Ferris, Anthony Giardullo, Sachin Kulkarni, Harry Li, Mark Marchukov, Dmitri Petrov, Lovro Puzar, Yee Jiun Song, and Venkat Venkataramani. [TAO: Facebook’s Distributed Data Store for the Social Graph](https://www.usenix.org/conference/atc13/technical-sessions/presentation/bronson). At *USENIX Annual Technical Conference* (ATC), June 2013.
|
||||
[[32](ch03.html#Bronson2013-marker)] Nathan Bronson, Zach Amsden, George Cabrera, Prasad Chakka, Peter Dimov, Hui Ding, Jack Ferris, Anthony Giardullo, Sachin Kulkarni, Harry Li, Mark Marchukov, Dmitri Petrov, Lovro Puzar, Yee Jiun Song, and Venkat Venkataramani. [TAO: Facebook’s Distributed Data Store for the Social Graph](https://www.usenix.org/conference/atc13/technical-sessions/presentation/bronson). At *USENIX Annual Technical Conference* (ATC), June 2013.
|
||||
|
||||
[[33](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Noy2019-marker)] Natasha Noy, Yuqing Gao, Anshu Jain, Anant Narayanan, Alan Patterson, and Jamie Taylor. [Industry-Scale Knowledge Graphs: Lessons and Challenges](https://cacm.acm.org/magazines/2019/8/238342-industry-scale-knowledge-graphs/fulltext). *Communications of the ACM*, volume 62, issue 8, pages 36–43, August 2019. [doi:10.1145/3331166](https://doi.org/10.1145/3331166)
|
||||
[[33](ch03.html#Noy2019-marker)] Natasha Noy, Yuqing Gao, Anshu Jain, Anant Narayanan, Alan Patterson, and Jamie Taylor. [Industry-Scale Knowledge Graphs: Lessons and Challenges](https://cacm.acm.org/magazines/2019/8/238342-industry-scale-knowledge-graphs/fulltext). *Communications of the ACM*, volume 62, issue 8, pages 36–43, August 2019. [doi:10.1145/3331166](https://doi.org/10.1145/3331166)
|
||||
|
||||
[[34](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Feng2023-marker)] Xiyang Feng, Guodong Jin, Ziyi Chen, Chang Liu, and Semih Salihoğlu. [KÙZU Graph Database Management System](https://www.cidrdb.org/cidr2023/papers/p48-jin.pdf). At *3th Annual Conference on Innovative Data Systems Research* (CIDR 2023), January 2023.
|
||||
[[34](ch03.html#Feng2023-marker)] Xiyang Feng, Guodong Jin, Ziyi Chen, Chang Liu, and Semih Salihoğlu. [KÙZU Graph Database Management System](https://www.cidrdb.org/cidr2023/papers/p48-jin.pdf). At *3th Annual Conference on Innovative Data Systems Research* (CIDR 2023), January 2023.
|
||||
|
||||
[[35](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#Besta2019-marker)] Maciej Besta, Emanuel Peter, Robert Gerstenberger, Marc Fischer, Michał Podstawski, Claude Barthels, Gustavo Alonso, Torsten Hoefler. [Demystifying Graph Databases: Analysis and Taxonomy of Data Organization, System Designs, and Graph Queries](https://arxiv.org/pdf/1910.09017.pdf). *arxiv.org*, October 2019.
|
||||
[[35](ch03.html#Besta2019-marker)] Maciej Besta, Emanuel Peter, Robert Gerstenberger, Marc Fischer, Michał Podstawski, Claude Barthels, Gustavo Alonso, Torsten Hoefler. [Demystifying Graph Databases: Analysis and Taxonomy of Data Organization, System Designs, and Graph Queries](https://arxiv.org/pdf/1910.09017.pdf). *arxiv.org*, October 2019.
|
||||
|
||||
[[36](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#TinkerPop2023-marker)] [Apache TinkerPop 3.6.3 Documentation](https://tinkerpop.apache.org/docs/3.6.3/reference/). *tinkerpop.apache.org*, May 2023. Archived at [perma.cc/KM7W-7PAT](https://perma.cc/KM7W-7PAT)
|
||||
[[36](ch03.html#TinkerPop2023-marker)] [Apache TinkerPop 3.6.3 Documentation](https://tinkerpop.apache.org/docs/3.6.3/reference/). *tinkerpop.apache.org*, May 2023. Archived at [perma.cc/KM7W-7PAT](https://perma.cc/KM7W-7PAT)

[[37](ch03.html#Francis2018-marker)] Nadime Francis, Alastair Green, Paolo Guagliardo, Leonid Libkin, Tobias Lindaaker, Victor Marsault, Stefan Plantikow, Mats Rydberg, Petra Selmer, and Andrés Taylor. [Cypher: An Evolving Query Language for Property Graphs](https://core.ac.uk/download/pdf/158372754.pdf). At *International Conference on Management of Data* (SIGMOD), pages 1433–1445, May 2018. [doi:10.1145/3183713.3190657](https://doi.org/10.1145/3183713.3190657)

[[38](ch03.html#EifremTweet-marker)] Emil Eifrem. [Twitter correspondence](https://twitter.com/emileifrem/status/419107961512804352), January 2014. Archived at [perma.cc/WM4S-BW64](https://perma.cc/WM4S-BW64)

[[39](ch03.html#Tisiot2021-marker)] Francesco Tisiot. [Explore the new SEARCH and CYCLE features in PostgreSQL® 14](https://aiven.io/blog/explore-the-new-search-and-cycle-features-in-postgresql-14). *aiven.io*, December 2021. Archived at [perma.cc/J6BT-83UZ](https://perma.cc/J6BT-83UZ)

[[40](ch03.html#Goel2020-marker)] Gaurav Goel. [Understanding Hierarchies in Oracle](https://towardsdatascience.com/understanding-hierarchies-in-oracle-43f85561f3d9). *towardsdatascience.com*, May 2020. Archived at [perma.cc/5ZLR-Q7EW](https://perma.cc/5ZLR-Q7EW)

[[41](ch03.html#Deutsch2022-marker)] Alin Deutsch, Nadime Francis, Alastair Green, Keith Hare, Bei Li, Leonid Libkin, Tobias Lindaaker, Victor Marsault, Wim Martens, Jan Michels, Filip Murlak, Stefan Plantikow, Petra Selmer, Oskar van Rest, Hannes Voigt, Domagoj Vrgoč, Mingxi Wu, and Fred Zemke. [Graph Pattern Matching in GQL and SQL/PGQ](https://arxiv.org/abs/2112.06217). At *International Conference on Management of Data* (SIGMOD), pages 2246–2258, June 2022. [doi:10.1145/3514221.3526057](https://doi.org/10.1145/3514221.3526057)

[[42](ch03.html#Green2019-marker)] Alastair Green. [SQL... and now GQL](https://opencypher.org/articles/2019/09/12/SQL-and-now-GQL/). *opencypher.org*, September 2019. Archived at [perma.cc/AFB2-3SY7](https://perma.cc/AFB2-3SY7)

[[43](ch03.html#Deutsch2018-marker)] Alin Deutsch, Yu Xu, and Mingxi Wu. [Seamless Syntactic and Semantic Integration of Query Primitives over Relational and Graph Data in GSQL](https://cdn2.hubspot.net/hubfs/4114546/IntegrationQuery%20PrimitivesGSQL.pdf). *tigergraph.com*, November 2018. Archived at [perma.cc/JG7J-Y35X](https://perma.cc/JG7J-Y35X)

[[44](ch03.html#vanRest2016-marker)] Oskar van Rest, Sungpack Hong, Jinha Kim, Xuming Meng, and Hassan Chafi. [PGQL: a property graph query language](https://event.cwi.nl/grades/2016/07-VanRest.pdf). At *4th International Workshop on Graph Data Management Experiences and Systems* (GRADES), June 2016. [doi:10.1145/2960414.2960421](https://doi.org/10.1145/2960414.2960421)

[[45](ch03.html#NeptuneDataModel-marker)] Amazon Web Services. [Neptune Graph Data Model](https://docs.aws.amazon.com/neptune/latest/userguide/feature-overview-data-model.html). Amazon Neptune User Guide, *docs.aws.amazon.com*. Archived at [perma.cc/CX3T-EZU9](https://perma.cc/CX3T-EZU9)

[[46](ch03.html#DatomicDataModel-marker)] Cognitect. [Datomic Data Model](https://docs.datomic.com/cloud/whatis/data-model.html). Datomic Cloud Documentation, *docs.datomic.com*. Archived at [perma.cc/LGM9-LEUT](https://perma.cc/LGM9-LEUT)

[[47](ch03.html#Beckett2011-marker)] David Beckett and Tim Berners-Lee. [Turtle – Terse RDF Triple Language](http://www.w3.org/TeamSubmission/turtle/). W3C Team Submission, March 2011.

[[48](ch03.html#Target2018-marker)] Sinclair Target. [Whatever Happened to the Semantic Web?](https://twobithistory.org/2018/05/27/semantic-web.html) *twobithistory.org*, May 2018. Archived at [perma.cc/M8GL-9KHS](https://perma.cc/M8GL-9KHS)

[[49](ch03.html#MendelGleason2022-marker)] Gavin Mendel-Gleason. [The Semantic Web is Dead – Long Live the Semantic Web!](https://terminusdb.com/blog/the-semantic-web-is-dead/) *terminusdb.com*, August 2022. Archived at [perma.cc/G2MZ-DSS3](https://perma.cc/G2MZ-DSS3)

[[50](ch03.html#Sporny2014-marker)] Manu Sporny. [JSON-LD and Why I Hate the Semantic Web](http://manu.sporny.org/2014/json-ld-origins-2/). *manu.sporny.org*, January 2014. Archived at [perma.cc/7PT4-PJKF](https://perma.cc/7PT4-PJKF)

[[51](ch03.html#MichiganOntologies-marker)] University of Michigan Library. [Biomedical Ontologies and Controlled Vocabularies](https://guides.lib.umich.edu/ontology), *guides.lib.umich.edu/ontology*. Archived at [perma.cc/Q5GA-F2N8](https://perma.cc/Q5GA-F2N8)

[[52](ch03.html#OpenGraph-marker)] Facebook. [The Open Graph protocol](https://ogp.me/), *ogp.me*. Archived at [perma.cc/C49A-GUSY](https://perma.cc/C49A-GUSY)

[[53](ch03.html#Haughey2015-marker)] Matt Haughey. [Everything you ever wanted to know about unfurling but were afraid to ask /or/ How to make your site previews look amazing in Slack](https://medium.com/slack-developer-blog/everything-you-ever-wanted-to-know-about-unfurling-but-were-afraid-to-ask-or-how-to-make-your-e64b4bb9254). *medium.com*, November 2015. Archived at [perma.cc/C7S8-4PZN](https://perma.cc/C7S8-4PZN)

[[54](ch03.html#W3CRDF-marker)] W3C RDF Working Group. [Resource Description Framework (RDF)](http://www.w3.org/RDF/). *w3.org*, February 2004.

[[55](ch03.html#Harris2013-marker)] Steve Harris, Andy Seaborne, and Eric Prud’hommeaux. [SPARQL 1.1 Query Language](http://www.w3.org/TR/sparql11-query/). W3C Recommendation, March 2013.

[[56](ch03.html#Green2013-marker)] Todd J. Green, Shan Shan Huang, Boon Thau Loo, and Wenchao Zhou. [Datalog and Recursive Query Processing](http://blogs.evergreen.edu/sosw/files/2014/04/Green-Vol5-DBS-017.pdf). *Foundations and Trends in Databases*, volume 5, issue 2, pages 105–195, November 2013. [doi:10.1561/1900000017](https://doi.org/10.1561/1900000017)

[[57](ch03.html#Ceri1989-marker)] Stefano Ceri, Georg Gottlob, and Letizia Tanca. [What You Always Wanted to Know About Datalog (And Never Dared to Ask)](https://www.researchgate.net/profile/Letizia_Tanca/publication/3296132_What_you_always_wanted_to_know_about_Datalog_and_never_dared_to_ask/links/0fcfd50ca2d20473ca000000.pdf). *IEEE Transactions on Knowledge and Data Engineering*, volume 1, issue 1, pages 146–166, March 1989. [doi:10.1109/69.43410](https://doi.org/10.1109/69.43410)

[[58](ch03.html#Abiteboul1995-marker)] Serge Abiteboul, Richard Hull, and Victor Vianu. [*Foundations of Databases*](http://webdam.inria.fr/Alice/). Addison-Wesley, 1995. ISBN: 9780201537710, available online at [*webdam.inria.fr/Alice*](http://webdam.inria.fr/Alice/)

[[59](ch03.html#Meyer2020-marker)] Scott Meyer, Andrew Carter, and Andrew Rodriguez. [LIquid: The soul of a new graph database, Part 2](https://engineering.linkedin.com/blog/2020/liquid--the-soul-of-a-new-graph-database--part-2). *engineering.linkedin.com*, September 2020. Archived at [perma.cc/K9M4-PD6Q](https://perma.cc/K9M4-PD6Q)

[[60](ch03.html#Bessey2024-marker)] Matt Bessey. [Why, after 6 years, I’m over GraphQL](https://bessey.dev/blog/2024/05/24/why-im-over-graphql/). *bessey.dev*, May 2024. Archived at [perma.cc/2PAU-JYRA](https://perma.cc/2PAU-JYRA)

[[61](ch03.html#Betts2012-marker)] Dominic Betts, Julián Domínguez, Grigori Melnik, Fernando Simonazzi, and Mani Subramanian. [*Exploring CQRS and Event Sourcing*](https://learn.microsoft.com/en-us/previous-versions/msp-n-p/jj554200(v=pandp.10)). Microsoft Patterns & Practices, July 2012. ISBN: 1621140164, archived at [perma.cc/7A39-3NM8](https://perma.cc/7A39-3NM8)

[[62](ch03.html#Young2014-marker)] Greg Young. [CQRS and Event Sourcing](https://www.youtube.com/watch?v=JHGkaShoyNs). At *Code on the Beach*, August 2014.

[[63](ch03.html#Young2010-marker)] Greg Young. [CQRS Documents](https://cqrs.files.wordpress.com/2010/11/cqrs_documents.pdf). *cqrs.wordpress.com*, November 2010. Archived at [perma.cc/X5R6-R47F](https://perma.cc/X5R6-R47F)

[[64](ch03.html#Petersohn2020-marker)] Devin Petersohn, Stephen Macke, Doris Xin, William Ma, Doris Lee, Xiangxi Mo, Joseph E. Gonzalez, Joseph M. Hellerstein, Anthony D. Joseph, and Aditya Parameswaran. [Towards Scalable Dataframe Systems](http://www.vldb.org/pvldb/vol13/p2033-petersohn.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 11, pages 2033–2046. [doi:10.14778/3407790.3407807](https://doi.org/10.14778/3407790.3407807)

[[65](ch03.html#Papadopoulos2016-marker)] Stavros Papadopoulos, Kushal Datta, Samuel Madden, and Timothy Mattson. [The TileDB Array Data Storage Manager](https://www.vldb.org/pvldb/vol10/p349-papadopoulos.pdf). *Proceedings of the VLDB Endowment*, volume 10, issue 4, pages 349–360, November 2016. [doi:10.14778/3025111.3025117](https://doi.org/10.14778/3025111.3025117)

[[66](ch03.html#Rusu2022-marker)] Florin Rusu. [Multidimensional Array Data Management](http://faculty.ucmerced.edu/frusu/Papers/Report/2022-09-fntdb-arrays.pdf). *Foundations and Trends in Databases*, volume 12, numbers 2–3, pages 69–220, February 2023. [doi:10.1561/1900000069](https://doi.org/10.1561/1900000069)

[[67](ch03.html#Targett2023-marker)] Ed Targett. [Bloomberg, Man Group team up to develop open source “ArcticDB” database](https://www.thestack.technology/bloomberg-man-group-arcticdb-database-dataframe/). *thestack.technology*, March 2023. Archived at [perma.cc/M5YD-QQYV](https://perma.cc/M5YD-QQYV)

[[68](ch03.html#Benson2007-marker)] Dennis A. Benson, Ilene Karsch-Mizrachi, David J. Lipman, James Ostell, and David L. Wheeler. [GenBank](https://academic.oup.com/nar/article/36/suppl_1/D25/2507746). *Nucleic Acids Research*, volume 36, database issue, pages D25–D30, December 2007. [doi:10.1093/nar/gkm929](https://doi.org/10.1093/nar/gkm929)

------