
Jump into Java microframeworks, Part 3: Spark

Matthew Tyson | Jan. 8, 2016
An extra lightweight, flexible, and scalable architecture for single-page web apps.

Spark makes fewer assumptions than the other microframeworks introduced in this short series, and it is also the most lightweight of the three stacks. Spark focuses on pure simplicity of request handling, and it supports a variety of view templates. In Part 1 you set up a Spark project in your Eclipse development environment, loaded some dependencies via Maven, and learned Spark programming basics with a simple example. Now we'll extend the Spark Person application, adding persistence and other capabilities that you would expect from a production-ready web app.

Download Java 8

Recall that we're using Spark 2 for the example application. Spark 2 requires Java 8, so you'll need to install the most recent Java update in order to follow the examples.

Data persistence in Spark

If you followed my introduction to Ninja, then you'll recall that Ninja uses Guice for persistence instrumentation, with JPA/Hibernate being the default choice. Spark makes no such assumptions about the persistence layer. You can choose from a wide range of options, including JDBC, eBean, and JPA. In this case, we'll use JDBC, which I'm choosing for its openness (it won't limit our choice of database) and scalability. As I did with the Ninja example app, I'm using a MariaDB instance on localhost. Listing 1 shows the database schema for the Person application that we started developing in Part 1.

Listing 1. Simple database schema for a Spark app



create table person (first_name varchar (200), last_name varchar (200), id int not null auto_increment primary key);


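For the JDBC examples that follow, we'll need a way to obtain a connection to that MariaDB instance. The helper class, connection URL, and credentials in the sketch below are illustrative assumptions rather than part of the example app; adjust them for your own environment, and make sure the MariaDB JDBC driver is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class Db {

	// Placeholder connection settings; adjust host, database, and credentials for your environment
	private static final String URL = "jdbc:mariadb://localhost:3306/sparkdemo";
	private static final String USER = "spark";
	private static final String PASSWORD = "spark";

	// Hands out a fresh JDBC connection to the MariaDB instance
	public static Connection getConnection() throws SQLException {
		return DriverManager.getConnection(URL, USER, PASSWORD);
	}

}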

CRUD (create, read, update, delete) capabilities are the heart of object-oriented persistence, so we'll begin by setting up the Person app's create-person functionality. Instead of coding the CRUD operations straightaway, we'll start with some back-end infrastructure. Listing 2 shows a basic DAO layer interface for Spark.

Listing 2. DAO.java interface



import java.util.Map;

public interface DAO {

	// Persists a person represented as a map of field names to values
	public boolean addPerson(Map<String, Object> data);

}



Next we'll add the JdbcDAO implementation. For now we're just blocking out a stub that accepts a map of data and returns success. Later we'll use that data to define the entity fields.

Listing 3. JdbcDAO.java implementation



import java.util.Map;

public class JdbcDAO implements DAO {

	@Override
	public boolean addPerson(Map<String, Object> data) {
		// Stub: always report success; the JDBC insert comes later
		return true;
	}

}


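When we move beyond the stub, addPerson could look roughly like the following sketch, which inserts the first and last names from the map using a prepared statement. The Db helper shown earlier and the map keys used here are assumptions for illustration, not part of the example app yet.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;

public class JdbcDAO implements DAO {

	@Override
	public boolean addPerson(Map<String, Object> data) {
		String sql = "insert into person (first_name, last_name) values (?, ?)";
		try (Connection conn = Db.getConnection();
		     PreparedStatement stmt = conn.prepareStatement(sql)) {
			stmt.setString(1, (String) data.get("firstName")); // assumed map key
			stmt.setString(2, (String) data.get("lastName")); // assumed map key
			// One affected row means the insert succeeded
			return stmt.executeUpdate() == 1;
		} catch (SQLException e) {
			e.printStackTrace();
			return false;
		}
	}

}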

We'll also need a Controller class that takes the DAO as an argument. The Controller in Listing 4 is a stub that returns a JSON string describing success or failure.

Listing 4. A stub Controller



import java.util.HashMap;
import java.util.Map;

import org.mtyson.dao.DAO;

public class Controller {

	private DAO dao;

	public Controller(DAO dao) {
		super();
		this.dao = dao;
	}

	public String add(String type) {
		// Stub: the data map is still empty; later it will carry the entity fields
		Map<String, Object> data = new HashMap<String, Object>();
		if (dao.addPerson(data)) {
			return "{\"message\":\"Added a person!\"}";
		} else {
			return "{\"message\":\"Failed to add a person\"}";
		}
	}

}


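To exercise the stub, you can map the controller to a Spark route. The path and route parameter in this sketch are placeholders; the point is simply that Spark's lambda-friendly routing hands the request parameter straight to Controller.add() and returns its JSON string as the response body.

import static spark.Spark.post;

public class App {

	public static void main(String[] args) {
		Controller controller = new Controller(new JdbcDAO());

		// POST /people/person -> Controller.add("person"); the JSON message becomes the response body
		post("/people/:type", (request, response) -> {
			response.type("application/json");
			return controller.add(request.params(":type"));
		});
	}

}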
