Want to query your data in many different ways, even in ways you never anticipated? Want to visualize your logs in a variety of ways, with real-time filters based on time, text and other criteria?
Taking advantage of the performance and extensibility of the Elastic stack, we will see how easy this is through two examples.
This article was published in the DNC Magazine for Developers and Architects. You can download the magazine [PDF] or subscribe for free to access all previous and current editions.
In this article, I will introduce the popular search engine Elasticsearch, its accompanying visualization application Kibana, and show how .NET Core can easily be integrated with the Elastic stack.
Elasticsearch and .NET Core
We will begin exploring Elasticsearch through its REST API, by indexing and querying some data. Next, we will complete similar exercises using the official .NET API for Elasticsearch. Once we are familiar with Elasticsearch and its APIs, we will create a logging module in .NET Core and send its data to Elasticsearch. Finally, Kibana will visualize the indexed data in interesting ways.
I hope you find this article interesting and want to know more about the power of Elastic.
This article assumes that you already know the basics of C# and REST APIs. We will use tools such as Visual Studio, Postman and Docker, but you can easily use alternatives such as VS Code and Fiddler.
Elasticsearch - Introduction
At its core, Elasticsearch is a document store with powerful indexing capabilities whose data can be searched through a REST API. It is written in Java and based on Apache Lucene, although those details are hidden behind the API.
Through its indexed fields, any stored (indexed) document can be found by many different queries and aggregations.
However, Elasticsearch provides much more than powerful search over these indexed documents.
It is fast, distributed and horizontally scalable, supports real-time document storage and analysis, and scales to hundreds of servers and petabytes of indexed data. At the same time, as the core of the Elastic stack (also known as ELK), it powers applications such as Logstash, Kibana and more.
Kibana is a powerful visual query Web application dedicated to Elasticsearch. With Kibana, you can easily create queries, charts, and dashboards for data indexed in Elasticsearch.
Elasticsearch exposes a REST API, and you will find that many of the documentation examples are plain HTTP calls that you can try with tools such as curl or Postman. Of course, API clients have been written in many different languages, including .NET, Java, Python, Ruby and JavaScript.
If you want to read more, the official Elasticsearch website is probably the best place to go.
Docker is the simplest way to run locally
In this article, we need to connect to an Elasticsearch (and later Kibana) server. If you already have a server that runs locally or can be used, that's good. Otherwise, you need to build a server first.
You can choose to download and install Elasticsearch and Kibana on your local machine or on a VM or server you have available. However, I recommend Docker as the simplest and cleanest way to spin up Elasticsearch and Kibana.
You can run the following command directly to get the container containing Elasticsearch and Kibana.
docker run -it --rm -p 9200:9200 -p 5601:5601 --name esk nshou/elasticsearch-kibana
-it starts the container in interactive mode, attached to the terminal.
--rm means the container will be removed once you exit the terminal.
-p maps a port in the container to a port on the host.
--name gives the container a name, which is handy if you are not using --rm and want to stop/delete it manually.
nshou/elasticsearch-kibana is the name of an image on Docker Hub, where someone has already prepared Elasticsearch and Kibana for you.
If you prefer to run the container in the background, use the -d parameter instead of -it --rm, and stop/delete the container manually.
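For example, a detached run with the same image and container name would look roughly like this (a sketch; stop and remove the container yourself when you are done):

docker run -d -p 9200:9200 -p 5601:5601 --name esk nshou/elasticsearch-kibana

# later, once you are finished with it
docker stop esk
docker rm esk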
Running multiple applications in the same container, as we do now, is suitable for this article, but not recommended for production containers!
You should be aware that once you delete the container, your data disappears (and since we used the --rm option, the container is removed as soon as it stops). While that is fine for local experimentation, in the real world, if you do not want to lose your data, look into the "data container" pattern.
Docker is a great tool, and I encourage you to learn more about it, especially if you want to do something more serious than following this article and quickly spinning up a local Elasticsearch server. Docker and .NET Core were covered in the earlier article, Building DockNetFiddle using Docker and .NET Core.
Just open http://localhost:9200 and http://localhost:5601 and check that both Elasticsearch and Kibana are available. (If you use Docker Toolbox, replace localhost with the IP of the virtual machine hosting Docker, which you can find by running docker-machine env default on the command line.)
Elasticsearch running in Docker
Kibana is ready too
Indexing and querying in Elasticsearch
Before we start writing any .NET code, let's take a look at the basics. First, we will index some documents in Elasticsearch (similar to saving them in a database) so that we can run different queries against them.
Here, I'll use Postman to send HTTP requests to our Elasticsearch server, but you can use any other similar tool, such as Fiddler or curl.
The first thing we need to do is ask Elasticsearch to create a new index and index some documents (similar to inserting data into a database). This is similar to storing data in a table/collection, the main difference (and purpose) being that the Elasticsearch cluster (here just a single node) can analyze and search the document data.
Indexed documents are organized in Elasticsearch by index and type. In the past this was often compared with databases and tables, which can be a confusing comparison: indexes are handled by Lucene and distributed across shards, and types are closely tied to their index.
Send the following two requests to create the index and insert a document into it (remember, if you use Docker Toolbox, use the IP of the virtual machine hosting Docker instead of localhost):
Create a new index named "default".
PUT localhost:9200/default
Index the document in the default index. Please note that we need to know what type of document we store ("product") and the ID of that document (such as 1, although you can use any value, as long as it is unique)
PUT localhost:9200/default/product/1
{
  "name": "Apple MacBook Pro",
  "description": "Latest MacBook Pro 13",
  "tags": ["laptops", "mac"]
}
Create a new index
Index new documents
Before we verify the search functionality and query the data, index a few more products. Try different "tags", such as "laptops" and "desktops", and remember to use different ids!
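For example, a second document tagged as a desktop could look like this (the name and description are just made-up sample values):

PUT localhost:9200/default/product/2
{
  "name": "HP Pavilion Desktop",
  "description": "Quad-core HP desktop PC",
  "tags": ["desktops", "windows"]
}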
When you are done, let's search all indexed documents, sorted by name. You can express the query either as a query string on a GET request or as a JSON body on a POST; the following two requests are equivalent:
GET http://localhost:9200/default/_search?q=*&sort=name.keyword:asc

POST http://localhost:9200/default/_search
{
  "query": { "match_all": {} },
  "sort": [
    { "name.keyword": "asc" }
  ]
}
Let's try something more interesting, such as searching for all documents that contain "latest" in the "description" field and "laptops" in the "tags" field:
POST http://localhost:9200/default/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "description": "latest" } },
        { "match": { "tags": "laptops" } }
      ]
    }
  },
  "sort": [
    { "name.keyword": "asc" }
  ]
}
Search results
Visualizing data with Kibana
As the last part of this introduction, we will just scratch the surface of Kibana.
Assuming you have indexed several documents in the previous step, open the Kibana server running in Docker by visiting http://localhost:5601. You will notice that Kibana asks you for a default index pattern, so you must tell it which Elasticsearch index to use:
We created an index named "default" in the previous section, so we can use "default" as the index pattern.
You also need to uncheck the "Index contains time-based events" option, because our documents do not contain any time field.
Adding an index pattern in Kibana
When you are done, open the Discover page from the menu on the left, and you should see all the documents inserted in the previous section. Try selecting different fields, and entering a term or a filter in the search bar:
Visualization of data in kibana
Finally, let's create a pie chart showing the proportion of products tagged as "laptops" or "desktops". Using the previously indexed data, create a new "Pie Chart" from the left menu.
You can configure it on the Pie Chart page. Use "Count" as the slice size, and select "split slices" in the "buckets" section. Use "filters" as the aggregation type and add two filters: tags = "laptops" and tags = "desktops". Click run, and you will see something similar to the following figure:
Create a pie chart in Kibana
Try entering search keywords in the search bar that restrict the filtered items, and notice how the visualization changes.
Elasticsearch .NET API
After this brief introduction to Elasticsearch and Kibana, let's see how we can index and query our documents from a .NET application.
You may wonder why you would do this instead of using the HTTP API directly. I can give you a few reasons, and I'm sure you can come up with more yourself:
You don't want to expose the Elasticsearch cluster directly.
Elasticsearch may not be your primary database, and you may need to combine its results with data from your own database.
You may want to index documents as part of a process running on your own storage/production servers.
The first thing to notice when you open the documentation is that there are two official APIs, Elasticsearch.Net and NEST, both of which support .NET Core projects.
Elasticsearch.Net is a low-level API for connecting to Elasticsearch, providing the building blocks to create and process requests and responses. It is a thin .NET client.
NEST provides a higher-level API on top of Elasticsearch.Net. It maps your objects to requests and responses, provides powerful querying capabilities, and uses conventions for index names, document types and field types to build queries matching the HTTP REST API.
Elasticsearch .NET API
Since I am using NEST, the first step is to create a new ASP.NET Core application and install NEST using the NuGet Package Manager.
Start indexing data using NEST
In the new ASP.NET Core application we will repeat some of the steps we performed earlier by manually sending HTTP requests. If necessary, restart the Docker container to clean up the data, or manually delete the documents/index through the HTTP API and Postman.
Let's first create a POCO model for the product:
public class Product
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public string[] Tags { get; set; }
}
Next, we create a new controller, ProductController, with one method to add a new Product and another to find Products matching a single search term:
[Route("api/[controller]")]
public class ProductController : Controller
{
    [HttpPost]
    public async Task<IActionResult> Create([FromBody] Product product)
    {
    }

    [HttpGet("find")]
    public async Task<IActionResult> Find(string term)
    {
    }
}
In order to implement these methods, we first need to connect to Elasticsearch. This is where ElasticClient comes in. Since this class is thread-safe, the recommended approach is to use a single instance across the application (a singleton) rather than creating a new connection per request.
For brevity, I will use a private static field with hard-coded settings here. In a real application, use the .NET Core dependency injection and configuration frameworks, or view the code on GitHub.
As you would expect, at a minimum you need to provide the URL of the Elasticsearch cluster to connect to. Of course, there are other optional parameters for authenticating with your cluster, setting timeouts, connection pooling, and so on; a small sketch of these follows the basic connection code below.
private static readonly ConnectionSettings connSettings =
    new ConnectionSettings(new Uri("http://localhost:9200/"));

private static readonly ElasticClient elasticClient =
    new ElasticClient(connSettings);
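As a rough sketch of those optional settings (none of these particular values are required by this article; the credentials and timeout are purely illustrative), and of how the client could be registered through dependency injection instead of a static field:

var settings = new ConnectionSettings(new Uri("http://localhost:9200/"))
    .DefaultIndex("default")
    .BasicAuthentication("elastic", "changeme")   // only if your cluster requires authentication
    .RequestTimeout(TimeSpan.FromSeconds(30));    // fail faster than the default timeout

// In Startup.ConfigureServices: share a single thread-safe client across the application
services.AddSingleton(new ElasticClient(settings));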
After the connection is established, indexing the document simply uses ElasticClient's Index/IndexAsync method:
[HttpPost]
public async Task<IActionResult> Create([FromBody] Product product)
{
    // Index (save) the document in Elasticsearch
    await elasticClient.IndexAsync(product);
    return Ok();
}
It's simple, isn't it? Unfortunately, if you send the following request from Postman, you will see it fail.
POST http://localhost:65113/api/product
{
  "name": "Dell XPS 13",
  "description": "Latest Dell XPS 13",
  "tags": ["laptops", "windows"]
}
This is because NEST cannot determine which index to use when indexing the document! If you were using the HTTP API manually, you would indicate the index, the document type and the ID in the URL, such as localhost:9200/default/product/1.
NEST can infer the document type (using the name of the class) and the field types by default (based on the property types), but it needs some help with the index name. You can specify a default index name, as well as specific index names for specific types:
connSettings = new ConnectionSettings(new Uri("http://192.168.99.100:9200/"))
    .DefaultIndex("default")
    // Optionally override the default index for specific types
    .MapDefaultTypeIndices(m => m
        .Add(typeof(Product), "default"));
Try again after making these changes. You will see that NEST creates the index (if it does not already exist) and indexes the document. If you switch to Kibana, you can also see the document there. It is worth noting that:
The document type is inferred from the name of the class (product).
The Id property of the class is inferred as the document identity.
All public properties are sent to Elasticsearch.
Documents indexed using NEST
Before we query the data, let's reconsider the way we created the index.
How do I create an index?
So far we have relied on the fact that the index is created automatically if it does not exist. However, the way fields are indexed matters, as it directly defines how Elasticsearch analyzes and searches those fields. This is especially true for string fields, since Elasticsearch 5 provides two different field types for them, "Text" and "Keyword":
Fields of type Text are analyzed and broken down into words, so they can be used by the more advanced Elasticsearch search features.
Fields of type Keyword, on the other hand, are taken "as is", without analysis, and can only be searched by their exact value.
You can control the mapping by decorating the POCO model with NEST index mapping attributes:
public class Product
{
    public Guid Id { get; set; }

    [Text(Name = "name")]
    public string Name { get; set; }

    [Text(Name = "description")]
    public string Description { get; set; }

    [Keyword(Name = "tags")]
    public string[] Tags { get; set; }
}
However, this requires us to create the index first, manually defining its mapping through the ElasticClient API. This is straightforward, especially since we are just using attributes:
if (!elasticClient.IndexExists("default").Exists)
{
    elasticClient.CreateIndex("default", i => i
        .Mappings(m => m
            .Map<Product>(ms => ms.AutoMap())));
}
Send a request (GET localhost:9200/default) directly to Elasticsearch and notice that the mapping is exactly what we wanted.
Index mapping created with NEST
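The relevant part of that response should look roughly like this (abbreviated; the exact output depends on your Elasticsearch version):

{
  "default": {
    "mappings": {
      "product": {
        "properties": {
          "id": { "type": "keyword" },
          "name": { "type": "text" },
          "description": { "type": "text" },
          "tags": { "type": "keyword" }
        }
      }
    }
  }
}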
Using NEST to query data
Now we have a ProductController that uses NEST to index products. It is time to add the Find action to the controller, so it queries Elasticsearch for documents using NEST.
We will implement a simple search using a single term across all fields. You should observe how the field mappings affect the results:
Fields mapped as "Text" are analyzed, so you can search for specific words within the "name"/"description" fields.
Fields mapped as "Keyword" are kept as is and not analyzed; the "tags" field can only be matched by its exact value.
NEST provides a rich API for querying Elasticsearch that translates into the standard HTTP API. Implementing the query above is as simple as using the Search/SearchAsync methods and building a SimpleQueryString query as the parameter.
[HttpGet("find")]
public async Task<IActionResult> Find(string term)
{
    var res = await elasticClient.SearchAsync<Product>(x => x
        .Query(q => q
            .SimpleQueryString(qs => qs.Query(term))));

    if (!res.IsValid)
    {
        // Something went wrong: surface the debug information from NEST
        throw new InvalidOperationException(res.DebugInformation);
    }

    return Json(res.Documents);
}
Test your new action using Postman:
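For example (65113 is the port my application happened to use, as seen earlier; yours may differ), the following should each return the matching products:

GET http://localhost:65113/api/product/find?term=latest
GET http://localhost:65113/api/product/find?term=laptops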
Use nest query
As you may have realized, our action behaves the same as manually sending the following request to Elasticsearch:
GET http://localhost:9200/default/_search?q=<term>

Creating an Elasticsearch logging provider in .NET Core
Now that we know some NEST basics, let's try something more ambitious. Since we already have an ASP.NET Core application, let's implement our own logging provider that sends messages to Elasticsearch through the .NET Core logging framework.
The new logging API distinguishes between loggers and logger providers:
A logger is used to record information and events, for example from a controller.
Multiple logger providers can be added and enabled for your application, each with its own logging level, and each receiving the logged information/events.
The logging framework ships with built-in providers for the console, event log, Azure and so on, but as you will see, creating your own is not complicated. For more information, refer to the official .NET Core logging documentation.
In the final part of this article, we will create a new log provider for Elasticsearch, enable it in our application, and use Kibana to view logged events.
Add a new log provider for Elasticsearch
The first thing to do is define a new POCO object that we will use as the document indexed by NEST, similar to the Product class created earlier.
It will contain the logged information, plus optional details about any exception that occurred and about the related request. Recording request data will come in handy, because it lets us query and visualize the logged events per request.
public class LogEntry
{
    public DateTime DateTime { get; set; }
    public EventId EventId { get; set; }

    [Keyword]
    [JsonConverter(typeof(StringEnumConverter))]
    public Microsoft.Extensions.Logging.LogLevel Level { get; set; }

    [Keyword]
    public string Category { get; set; }
    public string Message { get; set; }

    [Keyword]
    public string TraceIdentifier { get; set; }
    [Keyword]
    public string UserName { get; set; }
    [Keyword]
    public string ContentType { get; set; }
    [Keyword]
    public string Host { get; set; }
    [Keyword]
    public string Method { get; set; }
    [Keyword]
    public string Protocol { get; set; }
    [Keyword]
    public string Scheme { get; set; }
    public string Path { get; set; }
    public string PathBase { get; set; }
    public string QueryString { get; set; }
    public long? ContentLength { get; set; }
    public bool IsHttps { get; set; }
    public IRequestCookieCollection Cookies { get; set; }
    public IHeaderDictionary Headers { get; set; }

    [Keyword]
    public string ExceptionType { get; set; }
    public string ExceptionMessage { get; set; }
    public string Exception { get; set; }
    public bool HasException { get { return Exception != null; } }
    public string StackTrace { get; set; }
}
The next step is to implement the ILogger interface in a new class. As you might expect, it maps the data to be logged to a new LogEntry object and indexes it using ElasticClient.
We will use IHttpContextAccessor so we can get the current HttpContext and extract the relevant request properties.
The code that connects to Elasticsearch and creates the index is omitted here, since it is no different from what we did before. Use a different index name, or delete the index containing the products from the previous section.
Note: as mentioned earlier regarding dependency injection and configuration, you can check the supporting code on GitHub.
The main method to implement is Log<TState>, where we create a LogEntry and index it with NEST:
public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
    Exception exception, Func<TState, Exception, string> formatter)
{
    if (!IsEnabled(logLevel)) return;

    var message = formatter(state, exception);
    var entry = new LogEntry
    {
        EventId = eventId,
        DateTime = DateTime.UtcNow,
        Category = _categoryName,
        Message = message,
        Level = logLevel
    };

    var context = _httpContextAccessor.HttpContext;
    if (context != null)
    {
        entry.TraceIdentifier = context.TraceIdentifier;
        entry.UserName = context.User.Identity.Name;
        var request = context.Request;
        entry.ContentLength = request.ContentLength;
        entry.ContentType = request.ContentType;
        entry.Host = request.Host.Value;
        entry.IsHttps = request.IsHttps;
        entry.Method = request.Method;
        entry.Path = request.Path;
        entry.PathBase = request.PathBase;
        entry.Protocol = request.Protocol;
        entry.QueryString = request.QueryString.Value;
        entry.Scheme = request.Scheme;
        entry.Cookies = request.Cookies;
        entry.Headers = request.Headers;
    }

    if (exception != null)
    {
        entry.Exception = exception.ToString();
        entry.ExceptionMessage = exception.Message;
        entry.ExceptionType = exception.GetType().Name;
        entry.StackTrace = exception.StackTrace;
    }

    elasticClient.Index(entry);
}
You also need to implement the BeginScope and IsEnabled methods.
For the purposes of this article, ignore BeginScope and simply return null.
Update the constructor so that it receives a LogLevel, and implement IsEnabled to return true whenever the level being logged is greater than or equal to the level received in the constructor.
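A minimal sketch of those members, assuming the constructor parameters used by the provider shown below (the supporting code on GitHub may differ in the details; the ElasticClient field and index-creation code are omitted here, as noted above):

public class ESLogger : ILogger
{
    private readonly IHttpContextAccessor _httpContextAccessor;
    private readonly string _categoryName;
    private readonly LogLevel _minLevel;

    public ESLogger(IHttpContextAccessor httpContextAccessor, string categoryName, LogLevel minLevel)
    {
        _httpContextAccessor = httpContextAccessor;
        _categoryName = categoryName;
        _minLevel = minLevel;
    }

    // Scopes are not supported by this simple logger
    public IDisposable BeginScope<TState>(TState state) => null;

    // Only log events at or above the level configured for this category
    public bool IsEnabled(LogLevel logLevel) => logLevel >= _minLevel;

    // Log<TState> is implemented as shown above
}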
You may be wondering why the category is needed. It is a string that identifies which type emitted the log. By default, every time an instance of ILogger<T> is injected, the category is set to the name of T. For example, injecting an ILogger<MyController> and using it to log events means those events will have the category "MyController".
This can come in handy, for example to set different log levels for different classes, or to filter/query the logged events. I'm sure you can think of more uses.
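As an illustration (MyController is a hypothetical controller, not part of the sample project), a typed logger is injected and used roughly like this:

public class MyController : Controller
{
    private readonly ILogger<MyController> _logger;

    public MyController(ILogger<MyController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        // Events logged here get the "MyController" category,
        // which we can later use to filter or query them in Kibana
        _logger.LogInformation("Rendering the index view");
        return View();
    }
}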
The implementation of the logger provider class looks like this:
public class ESLoggerProvider : ILoggerProvider
{
    private readonly IHttpContextAccessor _httpContextAccessor;
    private readonly FilterLoggerSettings _filter;

    public ESLoggerProvider(IServiceProvider serviceProvider, FilterLoggerSettings filter = null)
    {
        _httpContextAccessor = serviceProvider.GetService<IHttpContextAccessor>();
        _filter = filter ?? new FilterLoggerSettings
        {
            { "*", LogLevel.Warning }
        };
    }

    public ILogger CreateLogger(string categoryName)
    {
        return new ESLogger(_httpContextAccessor, categoryName, FindLevel(categoryName));
    }

    private LogLevel FindLevel(string categoryName)
    {
        var def = LogLevel.Warning;
        foreach (var s in _filter.Switches)
        {
            if (categoryName.Contains(s.Key))
                return s.Value;

            if (s.Key == "*")
                def = s.Value;
        }
        return def;
    }

    public void Dispose() { }
}
Finally, we create an extension method that can be used to register our logger provider in the Startup class:
public static class LoggerExtensions
{
    public static ILoggerFactory AddESLogger(this ILoggerFactory factory,
        IServiceProvider serviceProvider, FilterLoggerSettings filter = null)
    {
        factory.AddProvider(new ESLoggerProvider(serviceProvider, filter));
        return factory;
    }
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory
        .AddConsole(Configuration.GetSection("Logging"))
        .AddDebug()
        .AddESLogger(app.ApplicationServices, new FilterLoggerSettings
        {
            { "*", LogLevel.Information }
        });
    ...
}
Notice how I override the default settings and log events of every category at Information level and above. This way, we easily get a few events indexed per request.
Visualizing the data in Kibana
Now that our application is logging events to Elasticsearch, let's explore them in Kibana!
First, recreate the index pattern in Kibana, this time making sure to check "Index contains time-based events" and selecting the dateTime field as the "Time-field name".
Next, start your application and browse a few pages to generate some logged events. You can also add an action that throws an exception to any controller, so we can see the recorded exception data.
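For example, a throwaway action like the following (the route and message are arbitrary) is enough to generate exception events:

[HttpGet("boom")]
public IActionResult Boom()
{
    // Deliberately fail so the exception-related fields of LogEntry get populated
    throw new InvalidOperationException("Something went wrong on purpose");
}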
After that, go to the Discover page of Kibana, where you can see multiple events sorted by the "dateTime" field (by default, the data is filtered to the last 15 minutes, but you can change it in the upper right corner):
Events recorded in Kibana visualization
Try typing "exception" in the search bar and notice that it matches events containing "exception" in any analyzed text field. Then try searching for a specific exception type (remember, we used a Keyword field for it!).
You can also try searching for specific URLs, such as /Home/About versus "/Home/About" (with quotes). You will notice that the first search also includes events where the referrer is /Home/About, while the second correctly returns only events whose path is /Home/About.
Once you are familiar with the data and how to query it, you can use the data to create some interesting graphics.
First, we will create a chart showing the number of exceptions recorded per minute.
Go to the Visualize page of Kibana and create a new Vertical bar chart.
Select Count for the Y axis and a Date Histogram on the dateTime field for the X axis.
Set the interval to one minute, and finally add the filter "hasException:true" in the search box.
You will get a nice chart showing the number of exceptions logged per minute:
Number of exceptions recorded per minute
Next, let's display the number of messages logged over time for each category, limited to the top five categories:
Go to the Visualize page of Kibana and create a new Line chart.
Again, select Count for the Y axis and a Date Histogram for the X axis, with dateTime as the field and an interval of one minute.
Now add a sub-bucket and select "split lines". Use "significant terms" as the aggregation and category as the field, with a size of 5.
This will draw a chart similar to the following:
Messages logged per category over time
Try adding some filters to the search box and see how it affects the results.
Finally, let's add one more chart, this time showing the five most common messages within the top five categories.
Go to the Visualize page of Kibana and create a new Pie chart.
As before, select Count for the slice size.
Now split the chart using "Terms" as the aggregation and "category" as the field, ordered by count and limited to the top five.
Then split the slices using "Terms" as the aggregation and "message.keyword" as the field, ordered by count and limited to the top five.
Once you have these settings, you will see a chart similar to this one:
The most common messages in each category
Take your time to explore the data (the percentages and the message/category are displayed on the chart elements). For example, you will notice that the exceptions were logged by the DeveloperExceptionPageMiddleware class.
Conclusion
Elasticsearch is a powerful platform for indexing and querying data. While quite impressive on its own, combined with other applications such as Kibana it enables great analysis, reporting and visualization. We have barely scratched the surface here, and you can get remarkable results once you start using it.
The official Elasticsearch APIs have been rewritten for .NET and .NET Core, supporting .NET Standard 1.3 and later (support for 1.1 is still in progress).
As we have seen, using these APIs from an ASP.NET Core project is convenient: we easily used Elasticsearch both as a document store and as the target of a logging provider in the application.
Last but not least, I also encourage you to use Docker. Try using Elasticsearch with it while thinking about what Docker can do for you and your team.