
Azure Search (Day 10 of 31)

List of all posts from this series: http://vunvulearadu.blogspot.ro/2014/11/azure-blog-post-marathon-is-ready-to.html


Short Description 
Azure Search is a search engine offered by Microsoft Azure as a service. This means that it is fully managed by the cloud infrastructure and can be used successfully for full-text search, type-ahead suggestions and faceted navigation.

Main Features 
Full-text search
Supports full-text search, like any public search engine currently on the market.
REST API
All the features and search capabilities are exposed as a RESTful service that can be queried easily. It is available only over HTTPS.
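As a minimal sketch (the service name 'myservice' and the 'cars' index below are hypothetical), a search request is a plain HTTPS call that carries the access key in the api-key header:

// Search the 'cars' index for the term 'bmw'
GET https://myservice.search.windows.net/indexes/cars/docs?search=bmw&api-version=2014-10-20-Preview
api-key: [your access key]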
Type-ahead
Based on user input, Azure Search can recommend search phrases, 'predicting' what the user wants to search for.
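A sketch of how this can be queried, assuming a hypothetical 'cars' index whose name field was configured for suggestions:

// Ask for up to 5 phrase suggestions for the partial input 'bm'
GET /indexes/cars/docs/suggest?search=bm&$top=5&api-version=2014-10-20-Preview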
Near Match
The search engine is smart enough to return not only exact matches, but also near-match results. For example, if someone searches for “buy car”, the search engine can also return results for “buying car” and “buy cars”.
Faceted navigation
Allows users to navigate the search results using different filters such as date, category, type and so on.
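For example, a query can request facet counts next to the results (the 'category' and 'year' fields are hypothetical and would have to be marked as facetable):

// Return matching cars plus a count of results per category and per year
GET /indexes/cars/docs?search=*&facet=category&facet=year&api-version=2014-10-20-Preview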
Integration with 3rd party controls
It is easy to integrate with existing UI search controls. In this way we don’t need to develop the UI components from scratch.
Scalable in two ways
When you need more search capacity, you can scale by adding more service replicas (query throughput) or more partitions (storage).
Push based mechanism
All indexes, searchable data and documents are pushed by clients to Azure Search. There is no support for crawlers or other types of ‘indexers’. This means that clients need to push all changes directly to Azure Search. This can be useful when you need to update the searchable collections very fast.
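A sketch of the push model, assuming a hypothetical 'cars' index with the fields shown; the @search.action value tells the engine what to do with each document:

// Push one new document and delete another one in a single batch
POST /indexes/cars/docs/index?api-version=2014-10-20-Preview
api-key: [admin key]
Content-Type: application/json

{
  "value": [
    { "@search.action": "upload", "id": "1", "name": "BMW 320d", "price": 30000 },
    { "@search.action": "delete", "id": "2" }
  ]
}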
Index data persisted in Azure Search
In the current implementation of Azure Search, all the data that is indexed is persisted directly in the search engine.
Fields
In Azure Search terminology, a field contains searchable data such as product name, description or characteristics. Scoring information and other search-related data are also stored here.
Attributes
In Azure Search terminology, an attribute defines the operations that can be performed on a field, such as full-text search, filtering or faceting.
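The two concepts come together in the index definition. A minimal sketch for a hypothetical 'cars' index, where each field carries the attributes that decide which operations are allowed on it:

// Create the 'cars' index; attributes control search, filter, sort and facet
POST /indexes?api-version=2014-10-20-Preview
api-key: [admin key]
Content-Type: application/json

{
  "name": "cars",
  "fields": [
    { "name": "id",    "type": "Edm.String", "key": true },
    { "name": "name",  "type": "Edm.String", "searchable": true },
    { "name": "price", "type": "Edm.Double", "filterable": true, "sortable": true, "facetable": true }
  ]
}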
Documents
Continuing with Azure Search terminology, a document contains the detailed data used by the search engine to return results.
Scoring Profiles
Scoring profiles are used by the Azure Search engine to rank results based on programmable scoring. For example, you may want to boost results related to BMW when someone searches for ‘car’.
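A sketch of what such a profile can look like inside the index definition (the profile name and weights are illustrative); the profile is then referenced by name at query time:

// Declared in the index definition: matches in 'name' weigh twice
// as much as matches in 'description'
"scoringProfiles": [
  { "name": "boostName", "text": { "weights": { "name": 2, "description": 1 } } }
]

// Applied on a query
GET /indexes/cars/docs?search=bmw&scoringProfile=boostName&api-version=2014-10-20-Preview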
Security
A search request is allowed only if the caller presents a valid access key (API key). Based on this key, the caller is authorized to run the search.
Schema maintenance
In the search engine world, you cannot delete fields that are no longer used. Fields you don’t need anymore cannot be removed; they have to be set to null instead. New fields can be added to the schema without any problem (incremental schema).
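A sketch of such an incremental schema update, assuming the hypothetical 'cars' index from above; the index definition is sent again with the new field appended:

// Update the existing 'cars' index by adding a new 'fuelType' field
PUT /indexes/cars?api-version=2014-10-20-Preview
api-key: [admin key]
Content-Type: application/json

{
  "name": "cars",
  "fields": [
    { "name": "id",       "type": "Edm.String", "key": true },
    { "name": "name",     "type": "Edm.String", "searchable": true },
    { "name": "price",    "type": "Edm.Double", "filterable": true, "sortable": true, "facetable": true },
    { "name": "fuelType", "type": "Edm.String", "searchable": true }
  ]
}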
Free usage via shared resources
If you only want to play with the search engine and see what its capabilities are, you can use it as a shared service deployment. In this case you will not have dedicated resources, but you can play with it and test different use cases.
Replication
Allows us to copy each index into multiple replicas. Each replica of the standard Azure Search tier holds a new copy of the indexes.
Partitions
You can partition your data and indexes. You have full control over how many replicas and partitions you create.
OData Syntax
Queries can be constructed using OData syntax.
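For example, filtering and sorting reuse the familiar OData operators (the 'price' and 'year' fields are hypothetical):

// Cars under 20,000 built after 2010, cheapest first
GET /indexes/cars/docs?search=*&$filter=price lt 20000 and year gt 2010&$orderby=price asc&api-version=2014-10-20-Preview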
Document retrieval based on ID
You are allowed to retrieve documents using the document ID. This is useful when you want to display a preview of a result.
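A sketch of a lookup call, assuming a document whose key is '1':

// Retrieve a single document directly by its key
GET /indexes/cars/docs/1?api-version=2014-10-20-Preview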
Hit highlighting support
Allows us to highlight the search terms that were found in the returned results.
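A sketch of a query that asks for highlighting on a hypothetical 'description' field; by default the hits come back wrapped in <em> tags:

// Highlight the places where 'bmw' was found in the description
GET /indexes/cars/docs?search=bmw&highlight=description&api-version=2014-10-20-Preview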
Supports geo-index
Azure Search has support for geographic indexes (GeographyPoint and GeographyPolygon).
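A sketch of a geo-spatial filter, assuming the index has a GeographyPoint field named 'location' (geo.distance works in kilometers, and POINT takes longitude first):

// Cars located within 10 km of a point in Bucharest
GET /indexes/cars/docs?search=*&$filter=geo.distance(location, geography'POINT(26.10 44.43)') lt 10&api-version=2014-10-20-Preview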

Limitations 
One index per search request
You can have multiple indexes, but each search request can query only one index.
No web sites crawler
Such a feature could be useful for small websites and applications whose owners would like to index their content in a simple and inexpensive way.
No Shared Access Signature
At this moment, access is granted based on the admin API key. If you want to control user access to different documents, you can do it by specifying custom filters.
Partition size limits
Each partition can hold a maximum of 15M documents, and each search service can have a maximum of 12 partitions. This means that you can index at most 180M documents in one search service.
Search results size
A search response can contain a maximum of 1,000 documents and a maximum of 10 suggestions.

Applicable Use Cases 
Below you can find four use cases where I would use the Azure Search engine:
e-Commerce application
I remember that some time ago I used Solr as the search engine solution for an e-Commerce application. It was not an easy job, because a lot of configuration had to be done. Today, for this use case, I would use Azure Search directly.
Navigation support for web applications
I would use Azure Search to improve the navigation support in web applications. For example, I would index all the content that is available in a web application, allowing users to search for content across the whole application.
Exam Results
In Romania, for example, we cannot search baccalaureate results by person name or high school. This is a perfect use case where Azure Search could be very useful, especially because people search for results intensively for one or two weeks, while for the rest of the year the load on the server is pretty low (replicas can be our best friend here :smile: )
Internal portals
Because access is limited and can be controlled based on the API key, we can successfully use the search capabilities for internal portals. Only applications with the API key will be allowed to query the content that you indexed.

Code Sample 
// Get all cars ordered by price descending
GET /indexes/cars/docs?search=*&$orderby=price desc&api-version=2014-10-20-Preview

// Get the second page of the search results, where the page size is 20
GET /indexes/cars/docs?search=*&$skip=20&$top=20&api-version=2014-10-20-Preview

// Get only cars that contain 'BMW' in the description
GET /indexes/cars/docs?search=bmw&searchFields=description&api-version=2014-10-20-Preview

// Get only the 'type' field of cars that contain 'BMW' in the description
GET /indexes/cars/docs?search=bmw&searchFields=description&$select=type&api-version=2014-10-20-Preview

Pros and Cons 
Pros

  • Push based mechanism
  • Geographical index
  • Scalable
  • Easy to configure and manage

Cons

  • No web sites crawler
  • No tool for data ingestion


Pricing 
When you calculate the pricing of the Azure Search engine, you should take into account the following (a small example of how search units add up can be found after the list):

  • Search Units
  • Outbound data transfer
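As a rough sketch of how search units are counted (a search unit being one replica of one partition; the actual hourly price should be taken from the official pricing page):

// Search units = replicas x partitions
// e.g. 3 replicas for query load and 2 partitions for storage:
3 replicas x 2 partitions = 6 search units billed per hour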


Conclusion
Azure Search can be a good option when we need to integrate a search engine that scales easily and is easy to manage and control. You should take a look at this search engine and see how simple it is to use.
Even if there is a limit on the maximum number of documents that you can index, you should stay calm. 180M indexed documents is already a big number.
