From fbcf555ca95d0c777429793ba2c9173e2aaf88f7 Mon Sep 17 00:00:00 2001
From: Hiroshi Yoshioka <40815708+hyoshioka0128@users.noreply.github.com>
Date: Fri, 19 Nov 2021 12:16:32 +0900
Subject: [PATCH] =?UTF-8?q?Typo=20"Azure=20CosmosDB"=E2=86=92"Azure=20Cosm?=
 =?UTF-8?q?os=20DB"?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

https://github.com/Azure/azure-cosmosdb-spark/blob/2.4/README.md
#PingMSFTDocs
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 20a9380..4847789 100644
--- a/README.md
+++ b/README.md
@@ -18,7 +18,7 @@ A **migration guide** to change applications which used the Spark 2.4 connector
 [![Build Status](https://travis-ci.org/Azure/azure-cosmosdb-spark.svg?branch=master)](https://travis-ci.org/Azure/azure-cosmosdb-spark)
-`azure-cosmosdb-spark` is the official connector for [Azure CosmosDB](http://cosmosdb.com) and [Apache Spark](http://spark.apache.org). The connector allows you to easily read to and write from Azure Cosmos DB via Apache Spark DataFrames in `python` and `scala`. It also allows you to easily create a lambda architecture for batch-processing, stream-processing, and a serving layer while being globally replicated and minimizing the latency involved in working with big data.
+`azure-cosmosdb-spark` is the official connector for [Azure Cosmos DB](http://cosmosdb.com) and [Apache Spark](http://spark.apache.org). The connector allows you to easily read to and write from Azure Cosmos DB via Apache Spark DataFrames in `python` and `scala`. It also allows you to easily create a lambda architecture for batch-processing, stream-processing, and a serving layer while being globally replicated and minimizing the latency involved in working with big data.
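
For context on the README line this patch touches, here is a minimal PySpark sketch (not part of the patch itself) of reading from and writing to Azure Cosmos DB with the 2.4-era `azure-cosmosdb-spark` connector. The endpoint, key, database, and collection values are placeholders, and the option names are assumptions based on that connector's documented configuration.

```python
# Minimal sketch, assuming the 2.4-era azure-cosmosdb-spark connector is on the classpath;
# all <...> values below are placeholders, not real credentials.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cosmosdb-spark-example").getOrCreate()

read_config = {
    "Endpoint": "https://<your-account>.documents.azure.com:443/",
    "Masterkey": "<your-master-key>",
    "Database": "<your-database>",
    "Collection": "<your-collection>",
}

# Read the Cosmos DB collection into a Spark DataFrame.
df = (spark.read
      .format("com.microsoft.azure.cosmosdb.spark")
      .options(**read_config)
      .load())

# Write the DataFrame back to the same collection, upserting documents.
write_config = dict(read_config, Upsert="true")
(df.write
 .format("com.microsoft.azure.cosmosdb.spark")
 .mode("append")
 .options(**write_config)
 .save())
```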