Starting with MicroProfile & Vaadin & JNoSQL

Hello, y’all.

Let’s see how to develop a super-hipster Java Web App. This will be a long-running post, evolving as time goes by and using technologies that solve the problems I face along the way. Please follow along and ask questions as they come!

We’ll start with a monolith from 2015: access to a NoSQL database, business logic and a web UI in a single deploy/release/file.

As you can guess from the title, I’ll already sprinkle some hipsterness onto this Web app microlith. I’ll use:

  • For NoSQL Access, JNoSQL
  • For the business logic, CDI 1.2 (provided by WildFly Swarm)
  • For the UI layer, Vaadin 8 (because I love not having to learn HTML/CSS/JSF to develop good functional Web UIs)

Life’s better with a scenario, so let’s pretend we’re developing a database of programming languages: we’ll store each language’s name and main link, and allow visitors to comment on the languages.

So let’s code!

The initial commit contains the basic CDI + Vaadin dependencies from the Vaadin examples. Starting simple, we’ll now create the entities and the database object (for now, in-memory only).
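
To give an idea of the model, here’s a minimal sketch of what the Language entity could look like at this stage (class and field names are my guess based on the scenario above, not necessarily the exact code in the commit):

public class Language {

    private String name;
    private String link;

    public Language(String name, String link) {
        this.name = name;
        this.link = link;
    }

    public String getName() { return name; }
    public String getLink() { return link; }
}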

We need a way to interact with the database, so I created a simple Web UI to add new data. Currently it only adds Languages; we’ll deal with comments later.
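
The form itself is plain Vaadin 8 code, roughly along these lines (a sketch only – the class name, and the LanguageDao with its save() method, are assumptions):

import com.vaadin.ui.Button;
import com.vaadin.ui.TextField;
import com.vaadin.ui.VerticalLayout;

// Sketch of the "add language" form; LanguageDao is a hypothetical in-memory DAO
public class AddLanguageForm extends VerticalLayout {

    public AddLanguageForm(LanguageDao dao) {
        TextField name = new TextField("Name");
        TextField link = new TextField("Main link");
        Button add = new Button("Add", click ->
                dao.save(new Language(name.getValue(), link.getValue())));
        addComponents(name, link, add);
    }
}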

So, now we have CDI + Vaadin working. You can run this project with mvn clean wildfly-swarm:run and package it into an überjar with mvn clean package.

At this point, NoSQL is still missing in our app. I don’t want to build the NoSQL access on a proprietary API, because my initial choice of NoSQL database may not be the best one for my use case. There are so many databases (more than 255) that I (still) don’t know where to start. So, I’ll use JNoSQL to model and connect my app to a NoSQL database. For now, I’ll use MongoDB as the database just because, no real reason.

To start a new MongoDB instance using Docker, just run:

docker run -d --name mongodb-instance -p 27017:27017 mongo

Some changes are necessary to our model, our DAO and our UI to support using the database, but the result is neat!
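
For reference, the mapped entity ends up looking roughly like this (a sketch only: the annotation package names have changed across JNoSQL releases, and the fields follow the scenario rather than the actual commit):

import org.jnosql.artemis.Column;
import org.jnosql.artemis.Entity;
import org.jnosql.artemis.Id;

@Entity("Language")
public class Language {

    @Id
    private String name;   // using the language name as the document id (an assumption)

    @Column
    private String link;

    // getters/setters omitted
}

Persistence then goes through JNoSQL’s injected abstractions (a DocumentTemplate, in the document-database case) instead of any MongoDB-specific API, which is exactly the point: swapping the database later shouldn’t touch the model or the business logic.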

That’s it for now: we started with CDI + Vaadin and added JNoSQL support. There’s still a long way to go before this is truly super-hipster:

  • Break UI and Backend into Microservices talking over
    • REST
    • Kafka
    • gRPC?
  • Do away with DAO and use a Repository instead
  • Handle comments!
  • Add events to notify concurrent users of new languages
  • Launch WildFly Swarm inside/as a Docker instance
  • … And many more!

What would you like to see next? Let me know!

Regards,

Leo.


Streams on Java 7

One of the “cool features” of Java 8 is Streams. It took me a while to use it, because I deal mostly with legacy code, but as new projects came along I got to use Java 8 and, hence, got used to streams + lambdas to write easy-to-read code.

Unfortunately, many of my coding days are still spent in Java 7 – either because clients are still on JVM 7, or because we have to develop new functionality on top of legacy code (that is Java 7).

In parallel, as part of our Collections.compare talk, I got to know of Eclipse Collections, which contains collections and tools I’d normally get with Guava or Apache Collections. Because I’m all about trying out different libraries, on recent new projects where I needed some extra collection (like BiMap or Multimap), I tried the Eclipse Collections version.

It has some particularities that felt strange to me at the beginning – for example, its interfaces are split between mutable and immutable and, unlike Java, the default is immutable (so the Multimap interface is immutable, and MutableMultimap is, of course, mutable) – but that’s not what this post is about.

I found that it has very good support for stream-style operations in its collections, even in its Java 7 release!

It starts with the addition of Functions and Predicates, which let you emulate lambdas in Java 7. If you need a field from an object (where you’d use Person::name in Java 8), you can create a public static final Function<Person, String> that extracts that field from your object.
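
As a minimal sketch (the Person class here is an assumption, written to match the snippet below), such constants could look like this in Java 7:

import org.eclipse.collections.api.block.function.Function;
import org.eclipse.collections.api.block.procedure.Procedure;

public class Person {

    private final String name;
    private final int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    // Java 7 "lambdas": reusable constants that extract a field...
    public static final Function<Person, String> NAME = new Function<Person, String>() {
        public String valueOf(Person person) {
            return person.name;
        }
    };

    public static final Function<Person, Integer> AGE = new Function<Person, Integer>() {
        public Integer valueOf(Person person) {
            return person.age;
        }
    };

    // ...or do something with each element (statically import this to use it as SYSOUT_PROCEDURE)
    public static final Procedure<Person> SYSOUT_PROCEDURE = new Procedure<Person>() {
        public void value(Person person) {
            System.out.println(person);
        }
    };
}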

Then, just use them in your ECollections! For example, if you have a MutableList<Person> people and want it sorted by name in reverse alphabetic order, filtered to people younger than 45 and printed to System.out, you can write:

people.sortThisBy(Person.NAME)
        .asReversed()
        .select(Predicates.attributeLessThan(Person.AGE, 45))
        .each(SYSOUT_PROCEDURE);

This totally feels like streams and lambdas, and EC has all the factories for Predicates and Functions to get you started.

I’m still a newbie in EC 7. I like that it enables me to write better, Java 8-style code in Java 7, and, even more helpfully: when we want to migrate this code to Java 8, EC 8’s interfaces extend the Java ones – so it will be totally compatible with Java 8 streams and lambdas!

So, even if you are stuck in the “past” with Java 7, you can jump into Java streams!

Java serialization options (1 of 4)

Java has plenty of serialization options. I tested a few to see how well they’d serialize a class that represents a reading, and how big the result is.

I’m looking at a few things: the size of the final byte array (as this impacts network costs), the speed of serialization/deserialization, compatibility with other systems, and ease of development. This entry looks into the final size on the wire – and, when it comes to size, I don’t expect any text protocol to beat a binary one.
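
As an illustration of how I measure that size, here’s a sketch using plain Java serialization (the Reading class is just a stand-in for the actual class I’m testing):

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SizeCheck {

    // stand-in for the reading class under test
    static class Reading implements Serializable {
        String sensorId = "sensor-1";
        long timestamp = System.currentTimeMillis();
        double value = 21.5;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Reading());
        }
        // the length of this array is what goes over the wire
        System.out.println("Serialized size: " + bytes.toByteArray().length + " bytes");
    }
}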


GraphStream – a Java Graph API

I stumbled upon GraphStream while looking for a Graph (not to be confused with Chart) API for Java. My use case was simple: mesh network analysis and planning – how the nodes in a given network are laid out, which nodes can see each other, how strong the mesh is, and how many paths there are to a given node.

Such use cases have the “smell” of graphs: the network nodes are your graph Nodes, the network links between them are the graph Edges, and you can then do a lot of analysis: calculate critical paths, find the nodes that are central to the mesh network – such nodes are good candidates for routing/gateway points.

GraphStream is very flexible: you create Nodes and Edges using the default implementations or your own factories. Nodes and Edges have attributes, and some have special meanings – for example, “xy” is a double[] used for positioning the nodes when visualizing the graph (and GraphStream has a very good visualization API). But you’re really free to do whatever you want with your graph. It also ships with lots of graph algorithms already implemented (which saved me some time!).
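
To make that concrete, here’s a minimal sketch of building a tiny mesh (GraphStream 1.x API; the node ids and the "strength" attribute are made up for the example):

import org.graphstream.graph.Edge;
import org.graphstream.graph.Graph;
import org.graphstream.graph.Node;
import org.graphstream.graph.implementations.SingleGraph;

public class MeshExample {

    public static void main(String[] args) {
        Graph mesh = new SingleGraph("mesh");

        // network nodes become graph Nodes; "xy" positions them when visualizing
        Node a = mesh.addNode("A");
        a.addAttribute("xy", 0.0, 0.0);
        Node b = mesh.addNode("B");
        b.addAttribute("xy", 1.0, 0.5);

        // a network link becomes a graph Edge, with its own attributes
        Edge link = mesh.addEdge("A-B", "A", "B");
        link.addAttribute("strength", 0.8);

        mesh.display(); // opens the built-in viewer
    }
}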

The API has lots of methods to iterate over nodes and edges, which is a boon when combined with the Java 8 Streams API. But this also exposes one of the shortcomings I found: the iterations are not thread-safe. Because changes (like a new attribute value) are propagated through non-synchronized queues, you can’t use parallelStream() – stick to stream().

Another shortcoming that had me scratching my head is how to remove nodes and edges: you can do so using strings (the ID) or by passing an object. But passing an object actually tries to use an internal number that may not remove what you want… so stick to strings. One thing I normally do is build a small Stream for this. For example, removing edges that are not critical to the network (i.e., their critical attribute is set to false):

merged.getEdgeSet().stream()
        .filter(e -> Boolean.FALSE.equals(e.getAttribute("critical")))
        .map(Edge::getId)
        .collect(Collectors.toList())
        .forEach(id -> merged.removeEdge(id));

Some ideas I have to expand this framework: make it interact with NoSQL graph databases (which would allow graphs with millions of nodes without needing TBs of RAM…), and make its event propagation system optional or disable it entirely (because my graphs are normally static). More advanced: modify it to be more parallel in its calculations!

Let me know if you’re interested in more Graph examples and I can publish more!

Reladomo: “not your typical [JPA] ORM”

During JavaOne 2016, Goldman Sachs open-sourced their internal ORM, named Reladomo. They also released katas for Reladomo. As I’d never done katas before, I thought “well, might as well do one for something new!”.

The kata comes with simple slides to introduce the framework and its concepts. It opens with a list of its characteristics:

  • “Reladomo, not your typical ORM”
  • Chaining logic
  • Object oriented, compiled, type-checked, query language
  • Transparent multi-schema support
  • Object oriented batch operations
  • Unit testable code
  • Flexible object relationship inflation

Just by this list, one can ask “well, what’s not typical about it”? But following the slides and kata exercises, you get a good glimpse of why this is not your typical ORM.

Most of my experience is with Hibernate (and later, JPA 1 in Java EE 5), so as I went along I had a lot of “wait, what?” thoughts that explain the “not so typical” definition. They were mostly related to entity definition using XML only (“aren’t we in the age of @Annotations by now?”), code generation (“how can we refactor stuff with a simple Rename in Eclipse?”) and the way transactions work (“Anonymous classes for transaction definition??”).

But overall, while different from JPA in many, many ways, I liked that we don’t rely on JPQL strings for defining queries (“how many times have I refactored easily in Eclipse only to have to hunt for JPQLs referring to the old names??”), that the Finder objects are handy, and that some other things I wanted to do in JPA – like controlling object inflation, auditing and time series – are already there.
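
To give an idea of what that looks like, here’s a rough sketch of a Finder query from memory of the kata (the Person domain and the generated class names are assumptions, so take it as illustrative rather than verified code):

// Reladomo generates Person, PersonFinder and PersonList from the XML definition
PersonList adults = PersonFinder.findMany(
        PersonFinder.age().greaterThan(18)
                .and(PersonFinder.lastName().eq("Smith")));

for (Person person : adults) {
    System.out.println(person.getFirstName());
}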

The kata got me interested enough in the framework to give it a try in a real-world project and see how it really can (or can’t?) improve our coding experience – and I shall write about it!

 

JSR363 by Example: IoT Temperature Sensor

Otavio, Werner and I gave a presentation at JavaOne 2016 about JSR 363, the JSR we have worked on for the last 2 years. It is now final, which means the world can use the API and its implementations in real projects!

To help you get started, I did a quick demonstration project on how an IoT device would use the API.

This project implements a temperature sensor running on an Intel Edison. Two actual sensors are supported: the internal temperature sensor of the Intel processor and the Omega RH USB Sensor. Both are represented by the TemperatureSensor interface, which exposes a single method to read the sensor and return a MeasurementRecord object.

MeasurementRecord is a basic object for IoT devices: they all have an id (String sensorId) and capture data (Quantity measurement) at a given timestamp (ZonedDateTime time). Every IoT project has that, and before JSR 363 that data would be a double, an int, or another primitive value – which made it hard to avoid bugs when working with more than one unit across different sensors. You had to guess, or rely on Javadoc, to know whether the value in question was a Temperature in Celsius or Active Power in Watts…
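
As a rough sketch of those types (the real project may differ in the details; here the measurement is typed as Quantity<Temperature>, since this is a temperature sensor):

import java.time.ZonedDateTime;
import javax.measure.Quantity;
import javax.measure.quantity.Temperature;

// Reads a physical sensor and returns a typed, unit-aware record
interface TemperatureSensor {
    MeasurementRecord read();
}

public class MeasurementRecord {

    private final String sensorId;
    private final Quantity<Temperature> measurement; // value + unit, thanks to JSR 363
    private final ZonedDateTime time;

    public MeasurementRecord(String sensorId, Quantity<Temperature> measurement, ZonedDateTime time) {
        this.sensorId = sensorId;
        this.measurement = measurement;
        this.time = time;
    }

    public String getSensorId() { return sensorId; }
    public Quantity<Temperature> getMeasurement() { return measurement; }
    public ZonedDateTime getTime() { return time; }
}

With the reference implementation (tec.units.ri), building the quantity then looks something like Quantities.getQuantity(21.5, Units.CELSIUS) – no more guessing what the double means.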

The main class is MainEdison; it reads from a sensor and POSTs the data as JSON to a given URL. Hence, the main class needs 2+ arguments: the URL to post the reading to (in full URL format) and the name(s) of the thermal zones to read from the processor (you can try thermal_zone1 for starters).
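
The POST itself needs nothing fancy; a sketch with plain HttpURLConnection would look like this (the real MainEdison may well use a different HTTP client):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical helper: serialize the record to JSON elsewhere, then POST it to the given URL
public final class ReadingPoster {

    public static int postJson(String url, String json) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setDoOutput(true);
        try (OutputStream out = connection.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        return connection.getResponseCode(); // e.g. 200/201 if the server accepted the reading
    }
}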

I used Maven to build a “fat jar”, so it’s easier to run. Just run mvn package on the project and it should be good to go!

Take a look at the project and please ask questions! I’ll come back to write more about JSR363 and explore more about its APIs and how to use them.

See you next time!

Leo.