Tuesday, January 29, 2013

When an annotation processor does not process...

In RHQ we have the rest-docs-generator, which takes annotations from the code and turns them into XML. As you may have guessed from the last sentence, this is implemented as a Java annotation processor, and it used to work quite well.

Yesterday I wanted to run it on the latest version of the code (we don't run it on every build, as it takes some time with all the backend processing) and it failed. Looking at the processor itself and its test runs did not reveal anything, as they continued to work.

As we use the maven processor plugin, I thought this might be failing because we now build with Java 7 (but then, I had used that before too) and upgraded the plugin, but that did not help. In the end I switched to the maven compiler plugin, which spat out a ton of errors and stack traces. It turned out that one of the classes on the classpath had an unsatisfied dependency and the annotation processor had just "died" silently before.

After adding the dependency, the errors were gone, but the Processor.init() method was still not called and no processing happened. Looking through tons of output I found this:


Processor <hidden to protect the innocent> matches
[javax.persistence.PersistenceContext,
[……]
javax.ws.rs.Consumes,
javax.interceptor.Interceptors, com.wordnik.swagger.annotations.Api,
javax.ejb.Startup] and returns true.

This "returns true" together with the list of annotations I am interested in means that this other processor that is now in the classpath (probably pulled in when we switched from as4 to as7) swallows all those annotations, so that they are not passed to our processor.

The solution in my case was to explicitly name the rest-docs-generator in the compiler plugin configuration and not rely on auto-discovery (I did that in the processor plugin already, but it looks like this had no effect in my case):


<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.0</version>
  <configuration>
    <annotationProcessors>
      <processor>org.rhq.helpers.rest_docs_generator.ClassLevelProcessor</processor>
    </annotationProcessors>
    <proc>only</proc>
    <compilerArguments>
      <AtargetDirectory>${project.build.directory}/docs/xml</AtargetDirectory>
      <AmodelPkg>org.rhq.enterprise.server.rest.domain</AmodelPkg>
      <!-- enable the next line to have the output of the processor shown on console -->
      <!--<Averbose>true</Averbose>-->
    </compilerArguments>
    <!-- set the next to true to enable verbose output of the compiler plugin -->
    <!--<verbose>false</verbose>-->
  </configuration>
[…]


Note that to find the above "Processor .. matches .." output, the compiler plugin must be set to verbose.

I have meanwhile also heard that a newer version of that "rogue" processor now only claims its own annotations.

Monday, January 28, 2013

Recap of RHQ @ LJCs first Meet a Project event

[RHQ logo | LJC logo]

Last Thursday I was in London at the London Java Community's (LJC) first "Meet a Project" event.

Getting there started with an aborted take-off in Stuttgart. The plane accelerated on the runway and then, all of a sudden, braked hard. We exited onto the movement area, parked there for some minutes and then circled back onto the runway to finally take off. But anyway, I arrived safely and early enough in London.

The "Meet a Project" (MaP) event was held at the University College London campus. When I arrived a few attendees were already there and soon after Barry, the organizer joined too. We started in one room to explain "the rules" and then split into three rooms.


Myself talking at the back table in the last of the six sessions.


To explain how MaP works, think of "speed dating for projects". There were six projects present, so six groups were formed, each sitting around a table. The project ambassadors (like myself) then spent 15 minutes per table presenting the project, explaining open source in general and giving hints on where and how attendees could get involved and help the project.

As I did not know what to expect (and as this was the first incarnation of MaP, no one really did), I created a small slide show to introduce RHQ and had a handout prepared to give to interested attendees. For the individual sessions I always took the full 15 minutes. Almost all attendees were very interested and I distributed over 20 handouts.

After the sessions were over at around 8:30pm, we went to a hotel bar to socialize, and then Manik, Davide, Sanne, Richard Warburton from jClarity and I went to an Indian restaurant to finally have dinner.

I cannot yet tell if the event was a success in the sense that the RHQ project really got any new contributors. What I can tell is that the "speed dating for projects" format felt really good to me, as the small groups allowed for intensive sessions with direct feedback on whether the concepts were clear. With around 50 attendees I am happy to have given away 20 handouts. While socializing, a few attendees told me that they had never heard of RHQ before, so it was good to introduce them to it. And one lady even switched tables to be able to listen to me before she had to leave early :)

Wednesday, January 23, 2013

Monitoring the monster

[RHQ logo | JDF logo]

The classical RHQ setup assumes an agent with agent plugins present on a machine ("platform" in RHQ-speak). The plugins communicate with the managed resource (e.g. an AS7 server), ask it for metric values or run operations (e.g. "reboot").

This article shows an alternative way of monitoring applications, using the Ticket Monster application from the JBoss Developer Framework as an example.


The communication protocol between the plugin and the managed resource depends on the capabilities of that resource. If the resource is a Java process, JMX is often used. In the case of JBoss AS 7, we use the DMR over HTTP protocol. For other kinds of resources this could also be file access, or JDBC in the case of databases. The next picture shows this setup.

RHQ classic setup


The agent plugin talks to the managed resource and pulls the metrics from it. The agent collects the metrics from multiple resources and then sends them in batches to the RHQ server, where they are stored, processed for alerting, and can be viewed in the UI or retrieved via the CLI and REST-api.

Extending

The above scenario is of course not limited to infrastructure and can also be used to monitor applications that sit inside e.g. AS7. You can write a plugin that uses the underlying connection to talk to the resource and gather statistics from there (if you build on top of the jmx or as7 plugin, you don't necessarily need to write Java code).

This also means that you need to add hooks to your application that export metrics and make them available in the management model (the MBean server in a classical JRE; the DMR model in AS7), so that the plugin can retrieve them.
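
As a rough illustration of such a hook (all the names here are made up for this sketch and are not part of TicketMonster), an application running in a plain JRE could register a simple MBean that a jmx-based plugin can then read:

import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;

import javax.management.ObjectName;

// Hypothetical MBean that exposes a booking counter to a jmx-based RHQ plugin
public class BookingStats implements BookingStatsMBean {

    private final AtomicLong ticketsSold = new AtomicLong();

    @Override
    public long getTicketsSold() {
        return ticketsSold.get();
    }

    // called from the application whenever a booking is made
    public void addTickets(int count) {
        ticketsSold.addAndGet(count);
    }

    // call once at application startup
    public void register() throws Exception {
        ManagementFactory.getPlatformMBeanServer().registerMBean(
                this, new ObjectName("ticketmonster:type=BookingStats"));
    }
}

// standard MBean interface following the JMX naming convention
interface BookingStatsMBean {
    long getTicketsSold();
}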

Pushing from the application

Another way to monitor application data is to have the application push data to the RHQ server directly. You still need a plugin descriptor in order to define the metadata in the RHQ server (what kinds of resources and metrics exist, what units the metrics have, etc.), but you only define the descriptor and do not write any Java code for the plugin; this works by inheriting from the No-op plugin. In addition, you can deploy that descriptor as a jar-less plugin.

The next graphic shows the setup:

RHQ with push from TicketMonster


In this scenario you can still have an agent with plugins on the platform, but this is not required (though it is recommended for basic infrastructure monitoring). On the server side we deploy the ticket-monster plugin descriptor.

The TicketMonster application has been augmented to push each booking to the RHQ server as two metrics: the total number of tickets sold and the total price of the booking (see BookingService.createBooking()).


@Stateless
public class BookingService extends BaseEntityService<Booking> {

    @Inject
    private RhqClient rhqClient;

    public Response createBooking(BookingRequest bookingRequest) {
        […]
        // Persist the booking, including cascaded relationships
        booking.setPerformance(performance);
        booking.setCancellationCode("abc");
        getEntityManager().persist(booking);
        newBookingEvent.fire(booking);
        rhqClient.reportBooking(booking);
        return Response.ok().entity(booking)
                .type(MediaType.APPLICATION_JSON_TYPE).build();
    }
}

This push happens over an HTTP connection to the REST-api of the RHQ server and is implemented inside the RhqClient singleton bean.

In this RhqClient bean we read the rhq.properties file on startup to determine whether there should be any reporting at all and how to reach the server. If reporting is enabled, we try to find the platform we are running on and, if the RHQ server does not yet know about it, create it. On top of the platform we create the TicketMonster server instance. This is safe to do multiple times, as is the platform creation: I first look for an existing platform, where an agent might already be monitoring basic data like CPU usage or disk utilization.
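
A minimal sketch of that startup logic could look like the following; the property keys, field names and the helper method are assumptions for illustration only, and the real RhqClient may differ:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

import javax.annotation.PostConstruct;
import javax.ejb.Singleton;

// Hypothetical skeleton of the startup part of an RhqClient-like bean
@Singleton
public class RhqClientSketch {

    private boolean reportTo;
    private boolean initialized;
    private String serverUrl;
    private int ticketMonsterServerId;

    @PostConstruct
    void init() {
        Properties props = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/rhq.properties")) {
            if (in != null) {
                props.load(in);
            }
            // hypothetical property names
            reportTo = Boolean.parseBoolean(props.getProperty("rhq.report", "false"));
            serverUrl = props.getProperty("rhq.server.url", "http://localhost:7080");

            if (reportTo) {
                // look up (or create) the platform and the TicketMonster server via the
                // RHQ REST-api and remember the resource id for later metric reports
                ticketMonsterServerId = findOrCreateTicketMonsterResource();
                initialized = true;
            }
        } catch (IOException e) {
            reportTo = false; // a broken monitoring setup must not break the application
        }
    }

    private int findOrCreateTicketMonsterResource() {
        // the REST calls against serverUrl are omitted in this sketch
        return -1;
    }
}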

The reporting of the metrics then looks like this:


@Asynchronous
public void reportBooking(Booking booking) {

    if (reportTo && initialized) {
        List<Metric> metrics = new ArrayList<Metric>(2);

        Metric m = new Metric("tickets",
                System.currentTimeMillis(),
                (double) booking.getTickets().size());
        metrics.add(m);

        m = new Metric("price",
                System.currentTimeMillis(),
                (double) booking.getTotalTicketPrice());
        metrics.add(m);

        sendMetrics(metrics, ticketMonsterServerId);
    }
}


Basically we construct two Metric objects and then send them to the RHQ server. The second parameter of sendMetrics() is the resource id of the TicketMonster server resource in the RHQ server, which we obtained from the create request mentioned above.
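
For completeness, here is a sketch of what sendMetrics() could do, continuing the hypothetical RhqClient skeleton from above and using plain HttpURLConnection. The endpoint path, the JSON layout and the Metric getters are assumptions; the real implementation posts to the RHQ REST-api, which may expect a different URL and payload:

// needs: java.io.OutputStream, java.net.HttpURLConnection, java.net.URL,
//        java.nio.charset.StandardCharsets, java.util.List
private void sendMetrics(List<Metric> metrics, int resourceId) {
    try {
        // hypothetical path, built from the serverUrl read in init()
        URL url = new URL(serverUrl + "/rest/metric/data/" + resourceId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // build a simple JSON array from the Metric objects (getter names assumed)
        StringBuilder json = new StringBuilder("[");
        for (Metric m : metrics) {
            if (json.length() > 1) {
                json.append(',');
            }
            json.append("{\"name\":\"").append(m.getName())
                .append("\",\"timestamp\":").append(m.getTimestamp())
                .append(",\"value\":").append(m.getValue()).append('}');
        }
        json.append(']');

        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.toString().getBytes(StandardCharsets.UTF_8));
        }
        if (conn.getResponseCode() >= 300) {
            // reporting must never break the booking flow, so we only log
            System.err.println("Reporting metrics failed: " + conn.getResponseCode());
        }
        conn.disconnect();
    } catch (Exception e) {
        System.err.println("Could not report metrics to RHQ: " + e.getMessage());
    }
}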

A difference from the classical setup, where the MeasurementData objects inside RHQ always have a so-called schedule id associated with them, is that in the above case we pass the metric name as it appears in the deployment descriptor and let the RHQ server sort out the schedule id.


<metric property="tickets" dataType="measurement"
displayType="summary" description="Total number tickets sold"/>
<metric property="price"
displayType="summary" description="Total selling price"/>


And voilà, this is what the sales created by the Bot inside TicketMonster look like:

Bookings in the RHQ-UI


The display interval has been set to "last 12 minutes". If you see a bar, it means that within one time slot of 12 min / 60 slots = 12 sec there were multiple bookings. In this case the bar shows the max and min values, while the little dot inside shows the average (via the REST-api it is still possible to see the individual values for the last 7 days).

Why would I want to do that?

The question here is of course: why would I want to send my business metrics to the RHQ server, which is normally used for infrastructure monitoring?

Because we can! :-)

Seriously, such business metrics can also indicate issues. If, for example, the number of ticket bookings is unusually high or low, this can be a source of concern and warrant an alert. Take the example of e-shops that sell electronics, where it has happened that someone made a typo and laptops normally sold at €1300 were suddenly offered at €130. Such news spreads quickly via social networks and sales triple over their normal numbers. Here, monitoring the number of laptops sold can be helpful.

The other reason is that RHQ, with its user concept, allows setting up special users that only have (read) access to the TicketMonster resources, but not to other resources inside RHQ. This way it is possible to give business people access to the metrics from monitoring the ticket sales.

[Resource tree with all resources | Resource tree as the TicketMonster-only user]


On the left you see the resource tree below the platform "snert" with all the resources as e.g. the 'rhqadmin' user sees it. On the right side, you see the tree as a user that only has the right to see the TicketMonster server ("TM").

TODOs

The above is a proof of concept to get this started. There are still some things left to do:

  • Create sub-resources for performances and report their bookings separately
  • Regularly report availability of the TicketMonster server
  • Better separate out the internal code that still lives in the RhqClient class
  • Get the above incorporated into TicketMonster proper - currently it lives in my private GitHub repo
  • Decide how to better handle an RHQ server that is temporarily unavailable
  • Get Forge to create the "send metric / … " code automatically when a class or field has some special annotation for this purpose. Same for the creation of new items like Performances in the TM case.


If you want to try it, you need a current copy of RHQ master -- the upcoming RHQ 4.6 release will have some of the changes on the RHQ side that are needed. The RHQ 4.6 beta does not yet have them.

Wednesday, January 16, 2013

RHQ 4.6 beta released


The RHQ team has been very busy since RHQ 4.5.1 (and actually already before that) and has switched the application server it uses to JBoss AS 7.1. Directly after the switchover we posted a first alpha version.


Now, after more work and fixes, we are happy to provide a beta version of RHQ 4.6 that resolves all the issues that arose from the switch. Features of this release are:

  • The internal app server is now JBossAS 7.1
  • GWT has been upgraded to version 2.5
  • There is a new installer (this has also changed since the 4.6 alpha release)
  • The REST-Api has been enhanced
  • Korean translations have been added (contributed by SungUk Jeon)


You can download the release from SourceForge.


This wiki document describes how to use the new installer.

The first version of the download unfortunately did not contain the Korean locale -- that is now fixed. If you have already downloaded the zip and do not need the Korean locale, you don't need to re-download.

Please try the release and give us feedback, be it in Bugzilla, on the mailing lists or in the forum.

AlertDefinitions in the RHQ REST-Api


I have in the last few days added support for alert definitions to the REST-api in RHQ master. Although this will make it into RHQ 4.6, it is not the final state of affairs.

On top of the API implementation I have also written 27 tests (for the alert part, at the time of writing this post) that use Rest Assured to test the API.

Please try the API, give feedback and report errors; if possible as Rest Assured tests, to increase the
test coverage and as an easy way to reproduce your issues.

I think it would also be very cool if someone could write a script, in whatever language, that exports definitions and applies them to a resource hierarchy on a different server (e.g. from test to production).

Monday, January 14, 2013

Korean translations contributed to RHQ

Login screen


Thanks to SungUk Jeon we now have Korean translations of the RHQ UI. They will first appear in the upcoming RHQ 4.6 release.

Dashboard
Individual resource


If Korean is not your default locale, you can switch to it by appending ?locale=ko to the URL of the RHQ UI, like http://localhost:7080/coregui?locale=ko.

Thanks a lot, SungUk

Thursday, January 10, 2013

Testing REST-apis with Rest Assured

The REST-api in RHQ is evolving, and I had long ago started writing some integration tests against it. I did not want to do that with pure HTTP calls, so I was looking for a testing framework and found one that I used for some time. I tried to enhance it a bit to better suit my needs, but didn't really get it to work.

I started searching again and this time found Rest Assured, which is almost perfect. Let's have a look at a very simple example:


expect()
    .statusCode(200)
    .log().ifError()
.when()
    .get("/status/server");


As you can see, this is a fluent API that is very expressive, so I don't really need to explain what the above is supposed to do.

In the next example I'll add some authentication:


given()
    .auth().basic("user","name23")
.expect()
    .statusCode(401)
.when()
    .get("/rest/status");


Here we add "input parameters" to the call, in our case the information for basic authentication, and expect the call to fail with a 401 "bad auth" response.

Now it is tedious to always provide the authentication bits throughout all tests, so it is possible to tell Rest Assured to always deliver a default set of credentials, which can still be overwritten as just shown:


@Before
public void setUp() {
    RestAssured.authentication = basic("rhqadmin","rhqadmin");
}


There are a lot more options to set as default, like the base URI, port, basePath and so on.
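
For example, a test setup could also fix the base URL like this (RestAssured and basic() come from the com.jayway.restassured.RestAssured class Rest Assured shipped at the time; the host, port and base path are just the values a local RHQ server would typically use):

@Before
public void setUp() {
    // defaults for every request issued by the tests in this class
    RestAssured.baseURI = "http://localhost";
    RestAssured.port = 7080;
    RestAssured.basePath = "/rest";
    RestAssured.authentication = basic("rhqadmin", "rhqadmin");
}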

Now let's have a look at how we can supply other parameters:


AlertDefinition alertDefinition = new AlertDefinition(….);

AlertDefinition result =
given()
    .contentType(ContentType.JSON)
    .header(acceptJson)
    .body(alertDefinition)
    .log().everything()
    .queryParam("resourceId", 10001)
.expect()
    .statusCode(201)
    .log().ifError()
.when()
    .post("/alert/definitions")
.as(AlertDefinition.class);


We start by creating a Java object AlertDefinition that we use as the body of the POST request. We define that it should be sent as JSON and that we expect JSON back. For the URL, a query parameter with the name 'resourceId' and the value '10001' should be appended. We also expect that the call returns a 201 (Created) and would like to see the details if this is not the case. Last but not least, we tell Rest Assured that it should convert the answer back into an object of type AlertDefinition, which we can then use to check constraints or work with further.
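
Continuing that example, the returned object can then be checked with plain JUnit assertions (the getter names on AlertDefinition are assumed here to mirror the JSON fields):

// org.junit.Assert.* is statically imported
assertTrue("The server should have assigned an id", result.getId() > 0);
assertEquals(alertDefinition.getName(), result.getName());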

Rest Assured offers another interesting, built-in way to check constraints with the help of XPath or its JSON counterpart, JsonPath:


expect()
    .statusCode(200)
    .body("name", is("discovery"))
.when()
    .get("/operation/definition");


In this (shortened) example we expect that the GET call returns OK and an object whose body has a field 'name' with the value 'discovery'.

Conclusion

Rest Assured is a very powerful framework for writing tests against a REST/hypermedia API. With its fluent approach and expressive method names, it makes it easy to understand what a certain call is supposed to do and return.

The Rest Assured web site has more examples and documentation. The RHQ code base now also has >70 tests using that framework.

Wednesday, January 09, 2013

A small blurb of what I am currently working on

I have not yet committed and pushed this to the repo, and it is still fragile and most likely to change - and still I want to share it with you:


$ curl -i -u rhqadmin:rhqadmin -X POST \
http://localhost:7080/rest/alert/definitions?resourceId=10001 \
-d @/tmp/foo -HContent-type:application/json
HTTP/1.1 201 Created
Server: Apache-Coyote/1.1
Location: http://localhost:7080/rest/alert/definition/10682
Content-Type: application/json
Transfer-Encoding: chunked
Date: Wed, 09 Jan 2013 21:41:10 GMT

{
  "id": 10682,
  "name": "-x-test-full-definition",
  "enabled": false,
  "priority": "HIGH",
  "recoveryId": 0,
  "conditionMode": "ANY",
  "conditions": [
    {
      "name": "AVAIL_GOES_DOWN",
      "category": "AVAILABILITY",
      "id": 10242,
      "threshold": null,
      "option": null,
      "triggerId": null
    }
  ],
  "notifications": [
    {
      "id": 10432,
      "senderName": "Direct Emails",
      "config": {
        "emailAddress": "enoch@root.org"
      }
    }
  ]
}


In the UI this looks like:

General view
Conditions
Notifications


Other features like dampening or recovery are not yet implemented.

For completeness, the content of /tmp/foo looks like this:


{
  "id": 0,
  "name": "-x-test-full-definition",
  "enabled": false,
  "priority": "HIGH",
  "recoveryId": 0,
  "conditionMode": "ANY",
  "conditions": [
    {
      "id": 0,
      "name": "AVAIL_GOES_DOWN",
      "category": "AVAILABILITY",
      "threshold": null,
      "option": null,
      "triggerId": null
    }
  ],
  "notifications": [
    {
      "id": 0,
      "senderName": "Direct Emails",
      "config": {
        "emailAddress": "enoch@root.org"
      }
    }
  ]
}