Elasticsearch is an open source, distributed, RESTful search engine built on top of Apache Lucene.
Like any service or component in your architecture, you’ll want to monitor it to ensure that it’s available and gather performance data to help with tuning.
In this brief post, we’ll look at how to monitor Elasticsearch using Opsview, which is built on Nagios and thus has access to a wide range of plugins, yet provides a more approachable user interface for configuring service checks.
The rest of the article assumes that you have Opsview (or the Opsview VMware appliance) installed and have completed the Quick Start.
We’ll install the plugin from https://github.com/rbramley/Opsview-elasticsearch into /usr/local/nagios/libexec/
The check_elasticsearch plugin is written in Perl so that it can be contributed back to Opsview; it requires the JSON module from CPAN (sudo cpan -i JSON).
The plugin includes usage instructions (run check_elasticsearch -h), which can also be viewed in Opsview by selecting the ‘Show Plugin Help’ link beneath the Plugin drop-down.
Service check setup
Figure 1 gives an overview of service check configurations.
Figure 1 – Check definitions overview
The checks in action
The check results shown in Figure 2 are visible by navigating through the host group hierarchy.
Figure 2 – service check results
Note: They’re showing as warning because the checks were run against a standalone instance rather than a cluster.
The current checks are based on the Cluster Health API; the intention is to add stats/status checks too, which will take threshold criteria and output performance data. The code for the check is on GitHub at https://github.com/rbramley/Opsview-elasticsearch so feel free to fork & send pull requests.
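To illustrate what a Cluster Health-based check boils down to, here is a simplified shell sketch, not the plugin’s actual Perl code: the endpoint URL and the field extraction are assumptions, and a canned JSON body stands in for a live HTTP call. It maps the reported status colour onto Nagios return codes:

```shell
#!/bin/sh
# Map an Elasticsearch cluster health colour onto a Nagios status line.
# Nagios return codes: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
es_health_check() {
  response="$1"
  # Pull the "status" field (green/yellow/red) out of the JSON body.
  status=$(printf '%s' "$response" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
  case "$status" in
    green)  echo "OK - cluster status is green";       return 0 ;;
    yellow) echo "WARNING - cluster status is yellow"; return 1 ;;
    red)    echo "CRITICAL - cluster status is red";   return 2 ;;
    *)      echo "UNKNOWN - could not parse status";   return 3 ;;
  esac
}

# Canned response for a standalone node; a live check would fetch it with
# something like: curl -s http://localhost:9200/_cluster/health
sample='{"cluster_name":"elasticsearch","status":"yellow","number_of_nodes":1}'
out=$(es_health_check "$sample")
echo "$out"
```

A standalone node reports yellow health because replica shards cannot be allocated, which is why such a check raises a warning outside a cluster.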
This is a quick how-to for Opsview users who need to monitor an OpenStack (Essex) Swift installation. As a starting point we’ll perform a ‘front door’ check as this should work no matter what Swift implementation you are using.
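As a hedged sketch of such a ‘front door’ check (the host, port, and endpoint here are assumptions to adjust for your deployment, and a canned body stands in for the live HTTP call), the Swift proxy’s healthcheck middleware answers with a bare OK when healthy:

```shell
#!/bin/sh
# Verify the body returned by the Swift proxy's healthcheck endpoint.
# Nagios return codes: 0=OK, 2=CRITICAL.
swift_front_door_check() {
  body="$1"
  if [ "$body" = "OK" ]; then
    echo "OK - Swift proxy healthcheck responded"
    return 0
  fi
  echo "CRITICAL - unexpected healthcheck response: $body"
  return 2
}

# Canned body; a live check would fetch it with something like:
#   body=$(curl -s --max-time 10 http://swift-proxy.example.com:8080/healthcheck)
result=$(swift_front_door_check "OK")
echo "$result"
```

Because it only exercises the proxy’s public face, this style of check stays valid regardless of how the storage backend is implemented.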
A few months back I was working on a Grails & Solr project – so it was a prime opportunity to answer my call to action (see the Solr plugin section of ‘Using Lucene in Grails’) and upgrade the Solr plugin to 3.5.0.
This was done in two phases, the first was to update the SolrJ client as required by the project and then once the project was completed to update the bundled Solr server with the aim of contributing the update back.
TL;DR: Grails Solr Plugin updated from Solr 3.5.0 to 3.6.0, available from https://github.com/rbramley/grails-solr-plugin pending https://github.com/mbrevoort/grails-solr-plugin/pull/2
Apache Mahout is a scalable machine learning framework that can be used to create intelligent applications. In this article we’ll see how Mahout can be used to create personalised recommendations within a Grails application.
This article originally appeared in the February 2012 edition of GroovyMag.
The ‘As a <user type>, I want to <action to be performed>, so that <business benefit>’ user story structure encourages good requirements, but it can often be abused, for instance by chaining several unrelated actions together with ‘and’ in a single story.
If you’re tasked with writing user stories or requirements, then I suggest that you read the Open Unified Process documentation guidance on ‘Writing good requirements’:
- Define one requirement at a time.
- Avoid conjunctions (and, or) that make multiple requirements.
- Avoid let-out clauses or words that imply options or exceptions (unless, except, if necessary, but).
- Use simple, direct sentences.
- Use a limited (500-word) vocabulary, especially if your audience is international.
- Identify the type of user who needs each requirement.
- Focus on stating what result you will provide for that type of user.
- Define verifiable criteria.
(see the OpenUP wiki link for the introduction and examples)
Or rather: numbering schemes with respect to grouping and ordering
For traceability it is critical for a requirement to have a unique identifier. No arguments on that front, but the challenge comes when people get to choose their own numbering scheme for requirements.
- First requirement
- Second requirement
- Third requirement
All good so far, you might think – it’s exactly the way they’d be numbered if I’d entered them in an issue tracker.
Well, let’s suppose that at the first pass the list is in a spreadsheet, was collated by functional area, and contains over 50 items.
It can easily go awry on subsequent iterations (e.g. after review cycles, prioritisation meetings, etc.) when people haven’t written good requirements (see above) or when you have epics that need splitting. At this point a system would append a row with a sequentially generated primary key, whereas with a spreadsheet the natural inclination is to insert a row into the table, which introduces the numbering dilemma.
There are 3 common behaviours:
- Generations of people trained in using numbered headings in documents will typically go with the nesting instinct e.g. 2.1.3
- Information workers who’ve dealt with numbering schemes such as Dewey Decimal might have used non-contiguous numbering in the first place e.g. start each new functional area at the next 100 – which may prevent or just defer the issue
- Data modellers may opt for the foreign key approach for splits and use other columns for grouping/sorting.
I’d encourage the latter approach: append to the table, re-sort later, and don’t get hung up on having beautifully ordered reference numbers. After all, you’re not doing Big Requirements Up Front, are you?
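A minimal sketch of that append-then-re-sort workflow, using a hypothetical CSV export of the requirements table (the column names id, area, sort_order are invented for illustration):

```shell
#!/bin/sh
# Requirements as a flat table: the ID is append-only and never renumbered;
# grouping (area) and ordering (sort_order) live in their own columns.
# Columns: id,area,sort_order,requirement
cat > reqs.csv <<'EOF'
1,Search,10,Index documents on upload
2,Search,20,Rank results by relevance
3,Admin,10,Allow an administrator to trigger a reindex
4,Search,15,Highlight matched terms in results
EOF

# Requirement 4 was appended last but sorts between 1 and 2; the stable
# IDs remain intact for traceability. Sort by area, then numerically by
# sort_order within each area.
sort -t, -k2,2 -k3,3n reqs.csv
```

Splitting an epic just means appending new rows with fresh IDs and adjusting sort_order values, rather than renumbering everything beneath an inserted row.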
“Adding power makes you faster on the straights. Subtracting weight makes you faster everywhere” – Colin Chapman
Think about the implications of this philosophy when writing and prioritising requirements!
This is a quick how-to post for Opsview users who need to monitor MarkLogic.
Apache Lucene is the leading open source search engine and is used in many businesses, projects and products. Lucene has sub-projects which provide additional functionality such as the Nutch web crawler and the Solr search service. This article gives an introduction to Lucene, a tutorial on three Grails Lucene plugins and a comparison between them.
This article originally appeared in the September 2011 edition of GroovyMag.