The time series of the JGB interest rate (10Y) over the past year

Worldwide quantitative easing shows no sign of ending. I live and work in Japan, which has some of the lowest interest rates in the world.

Thanks to Quandl and RStudio, I can easily get the interest rate data and visualize it with R and the dygraphs package!

The following document, published on RPubs, shows how to download the interest rate data from Quandl and visualize it as an interactive JavaScript graph (dygraphs) with R.

You can also download the entire content of the above document, including the R code, from the following link:
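As a minimal sketch of that workflow (the Quandl dataset code below is my own assumption; check Quandl for the exact code of the MoF 10Y series, and note that an API key may be required), here it is with a synthetic stand-in series so the snippet runs offline:

```r
library(xts)
library(dygraphs)
# With network access, the real series could be fetched via the Quandl package:
# library(Quandl)
# jgb <- Quandl("MOFJ/INTEREST_RATE_JAPAN_10Y", type = "xts")  # dataset code is an assumption

# Synthetic stand-in for the downloaded 10Y yield series
dates <- seq(as.Date("2014-12-01"), by = "day", length.out = 365)
jgb   <- xts(0.5 + cumsum(rnorm(365, 0, 0.005)), order.by = dates)
colnames(jgb) <- "JGB10Y"

# One line gives an interactive JavaScript (dygraphs) chart
g <- dygraph(jgb, main = "JGB interest rate (10Y) over the past year")
g
```

Evaluating `g` at the console (or in an R Markdown document) renders the interactive chart.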


4th Japan.R meeting on December 6

Last Saturday we held an exciting "Japan.R" meeting with many talks.
(I should have announced it here in advance.)

(This is a picture of our venue. Thanks, FreakOut!!!)

We are very pleased that so many participants (over 200!!!) enjoyed the talks and got to know each other at the pizza party.

The following link is the website for participants, where you can find the Japan.R timetable:

13:30-13:45  Opening Talk (@gepuro, @0kayu)
13:45-14:15  Easy to create a choropleth map with the choroplethr package (@AriLamstein)
14:15-14:45  Data science ecosystem: Approach in Opt data science lab (@shsaix)
15:00-15:30  Compare Deep Learning with the other classifiers with R (@TJO_datasci)
15:30-16:00  Machine Learning @ FreakOut (tentative) (@yanaoki)
16:15-17:15  Language discussion (R, SAS, Python, Julia, Excel (not a language...))
17:30-18:30  Lightning Talk festival
18:30-20:00  Pizza party

One of the highlights was that @AriLamstein, the developer of the choroplethr package, joined us and gave a talk!!! Thanks @AriLamstein!!!

You can also find reports on the Japan.R meeting at the following links.
The second link is especially useful because you can see the presentation slides that were uploaded online.

The next Japan.R meeting will be held next year, and I will post an advance notice next time.
Needless to say, everyone is welcome to join!

We are not sure how many people want to join but cannot read Japanese.
If that applies to you, please let me know; we will prepare an English version of the agenda and information for the next meeting.


Visualize violent crime rates in the US with choroplethr package

I came across the choroplethr package a few days ago via the blog post Animated Choropleths in R. As another visualization tool for R, I wanted to try it.
To install the latest stable release from CRAN, type the following at an R console:
install.packages("choroplethr")
To install the development version from GitHub using the devtools package, type the following:
install_github("choroplethr", "trulia")
Just running the example code shipped with the choroplethr package is not very interesting, so instead I used data from the rMaps package as a quick data source and visualized that.
The rMaps package includes violent crime rate data for the US.
These data can also be used to create animated choropleths, as on the following page:
In my case, I just process the data and visualize it with the following simple code:
# load packages
library(rMaps)        # provides the violent_crime data set
library(choroplethr)  # the original (trulia) version with the choroplethr() function
data(violent_crime)
# initialize the list of choropleths
choropleths = list()
# Get the years for the loop
years <- sort(unique(violent_crime$Year))
# convert the crime rates to level (factor) data
violent_crime$Crime <- cut(violent_crime$Crime, 9)
# Create one choropleth per year
for (i in 1:length(years)) {
    df <- subset(violent_crime, Year == years[i])
    # We need to change the column names for the choroplethr function
    colnames(df) <- c("Year", "region", "value")
    # Cut the decimals off: df$value <- round(df$value)
    title <- paste0("Violent crime rates: ", years[i])
    choropleths[[i]] = choroplethr(df, "state", title = title)
}
# Visualize it!
choroplethr_animate(choropleths)
The result is published via Dropbox at the following (image) link.


Seamless analytical environment by WDI, dplyr, and rMaps

Recently I found that my R guru @ramnath_vaidya is developing a new visualization package, rMaps.

I was so excited when I first saw it; I think it's really awesome for plotting any data on a map.

Let me explain how we can

  • Get data(WDI package)
  • Manipulate data(dplyr package)
  • Visualize the result(rMaps package)

with great R packages.

Except for the rMaps package, you can install these packages (WDI, dplyr) from CRAN in the usual way.

install.packages(c("WDI", "dplyr"))

To install the rMaps package, just type the following commands at the R console.
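Based on the rMaps README on GitHub, the installation commands look roughly like this (repository names assumed from @ramnath_vaidya's GitHub account, ramnathv):

```r
# install rMaps (and its rCharts dependency) from GitHub; repo names assumed
require(devtools)
install_github("ramnathv/rCharts@dev")
install_github("ramnathv/rMaps")
```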


(Don't forget to install the "devtools" package to use the install_github function.)

Now, as an example, I will show you how to:

  • Get "CO2 emissions (kt)" data from the World Bank with the WDI package
  • Summarize it with the dplyr package
  • Visualize it with the rMaps package

The result is shown below:


By the way, a Japanese R professional has recently been posting great articles. I recommend reading them, especially if you are interested in visualization and dplyr.

Source code:

# Get CO2 emission data from the World Bank
# Data source : http://data.worldbank.org/indicator/EN.ATM.CO2E.KT/
library(WDI)
library(dplyr)
library(countrycode)
library(rMaps)
df <- WDI(country = c("all"), indicator = "EN.ATM.CO2E.KT",
          start = 2004, end = 2013)
# Data manipulation by dplyr
data <- df %>%
  na.omit() %>%
  # Add iso3c format country code
  mutate(iso3c = countrycode(iso2c, "iso2c", "iso3c")) %>%
  group_by(iso3c) %>%
  # Get the most recent CO2 emission figure per country
  summarise(value = EN.ATM.CO2E.KT[which.max(year)])
# Visualize it by rMaps
i1 <- ichoropleth(value ~ iso3c, data, map = "world")
i1$show("iframesrc", cdn = TRUE) # for a blog post
# ... or you can plot directly by just evaluating "i1" at the R console.


SABR calibration on Shiny

As you already know (if you often use R!!), Shiny allows us to create web applications in R easily. It doesn't require knowledge of HTML, CSS, or JavaScript; I mean, we don't need to write HTML, CSS, and JavaScript directly to create a web application.

The users of your web application can change the input parameters interactively. This gives them a good opportunity to understand how the model works and what its limitations are.

Systematic Investor's examples are very nice applications of Shiny to financial analysis; I was so excited when I first saw them. I decided to create my own web application with Shiny as a "mock" quant.

My application is here (you can also get the source code of this application on GitHub):

It is a calibration of the SABR model based on Hagan's approximation formula (Managing Smile Risk, P. Hagan et al. (pdf)). In some derivative markets the SABR model is the de facto standard; with this app you can see to what degree the SABR model can fit the market implied volatilities (IV) and where its limits are.
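For reference, Hagan's lognormal implied-volatility approximation, the formula the app calibrates, can be sketched in a few lines of R. This is my own transcription of the published formula, not the app's actual code:

```r
# Hagan et al. (2002) lognormal SABR implied volatility approximation
# f: forward, K: strike, T: time to expiry; alpha, beta, rho, nu: SABR parameters
sabr_vol <- function(f, K, T, alpha, beta, rho, nu) {
  if (abs(log(f / K)) < 1e-10) {
    # ATM limit: the log-moneyness terms vanish and z/chi(z) -> 1
    term1 <- alpha / f^(1 - beta)
    zchi  <- 1
  } else {
    logfk <- log(f / K)
    fkb   <- (f * K)^((1 - beta) / 2)
    z     <- (nu / alpha) * fkb * logfk
    chi   <- log((sqrt(1 - 2 * rho * z + z^2) + z - rho) / (1 - rho))
    term1 <- alpha / (fkb * (1 + (1 - beta)^2 / 24 * logfk^2 +
                             (1 - beta)^4 / 1920 * logfk^4))
    zchi  <- z / chi
  }
  # time-dependent correction term
  corr <- 1 + ((1 - beta)^2 / 24 * alpha^2 / (f * K)^(1 - beta) +
               rho * beta * nu * alpha / (4 * (f * K)^((1 - beta) / 2)) +
               (2 - 3 * rho^2) / 24 * nu^2) * T
  term1 * zchi * corr
}

# Example: for beta = 1 and rho = 0 the formula produces a symmetric smile,
# so the OTM vol is above the ATM vol
sabr_vol(100, 100, 1, 0.2, 1, 0, 0.5)  # ATM
sabr_vol(100,  80, 1, 0.2, 1, 0, 0.5)  # OTM
```

Calibration then amounts to minimizing the squared differences between sabr_vol() and the market IVs over the parameters, e.g. with optim().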


Pricing defaultable discount bond with reduced form model

I often use R to write "prototype" programs. As you know, it offers very high productivity and a smart grammar. In this article, I would like to show you how to write a program that evaluates the price of a defaultable bond with a "reduced-form model".

Before writing the program, we need to understand how to price such a bond.
Under the risk-neutral measure, the price of a defaultable (discount) bond can be written as

v(t,T) = E[ delta_(tau) * exp(-∫_t^tau r(s) ds) * 1{tau <= T} + exp(-∫_t^T r(s) ds) * 1{tau > T} ]
In this equation, I set variables as below.

  • v(t,T) : The price of the defaultable bond with maturity T at time t
  • r(t) : The short rate at time t
  • delta_(t) : The recovery rate at time t
  • tau : The default time
  • 1{} : The indicator function

The second term in this equation is the usual repayment of principal, while the first term is the value you receive when the bond defaults.

For the Monte Carlo simulation, we need to model the short rate and the default intensity process; I modeled both as CIR processes.

The CIR model assumes that the stochastic process obeys the following stochastic differential equation:

dx(t) = kappa * (theta - x(t)) dt + sigma * sqrt(x(t)) dW_t

Each variable has the following meaning:

  • x(t) : The value of stochastic variable obeying CIR process at time t.
  • kappa : The speed of adjustment
  • theta : The mean of  stochastic variable
  • sigma : The volatility of  stochastic variable
  • W_t : Standard Brownian motion

And next, I set the simulation condition as below.

-------------------------------- Simulation condition -------------------------------
  • The number of path : 500
  • The number of grid per year : 250
  • Recovery rate : 0.7
  • Maturity of bond : 2
  • The correlation between interest rate and default intensity : 0.3
-------------------------------- Interest rate parameter ----------------------------
  • kappa : 0.6
  • theta : 0.05
  • sigma : 0.05
  • initial value : 0.05
-------------------------------- Default intensity parameter -------------------------
  • kappa : 0.6
  • theta : 0.2
  • sigma : 0.5
  • initial value : 0.2
After these settings, we can write the simulation program. The entire program is the following.
(Sorry for the comments written in Japanese; to tell the truth, this article was originally posted in Japanese on another site...)
(If you have any questions, don't hesitate to ask me :))
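A minimal sketch of such a simulation under the conditions above might look like the following (an Euler discretization of the two correlated CIR processes; all function and variable names here are my own, not the original program's):

```r
set.seed(1)

# Simulation conditions from the article
n.path <- 500; n.grid <- 250; maturity <- 2
recovery <- 0.7; rho <- 0.3
# CIR parameters (kappa, theta, sigma, initial value)
r.par <- list(kappa = 0.6, theta = 0.05, sigma = 0.05, x0 = 0.05)  # short rate
l.par <- list(kappa = 0.6, theta = 0.2,  sigma = 0.5,  x0 = 0.2)   # default intensity

dt     <- 1 / n.grid
n.step <- maturity * n.grid

price <- numeric(n.path)
for (p in 1:n.path) {
  r <- r.par$x0; lambda <- l.par$x0
  int.r  <- 0   # integrated short rate along the path
  payoff <- NA
  for (s in 1:n.step) {
    int.r <- int.r + r * dt
    # correlated Brownian increments for rate and intensity
    z1 <- rnorm(1)
    z2 <- rho * z1 + sqrt(1 - rho^2) * rnorm(1)
    # Euler steps, truncating at zero inside the square root
    r      <- r      + r.par$kappa * (r.par$theta - r) * dt +
                       r.par$sigma * sqrt(max(r, 0) * dt) * z1
    lambda <- lambda + l.par$kappa * (l.par$theta - lambda) * dt +
                       l.par$sigma * sqrt(max(lambda, 0) * dt) * z2
    # default on this step with probability lambda * dt
    if (runif(1) < max(lambda, 0) * dt) {
      payoff <- recovery * exp(-int.r)  # discounted recovery at default time
      break
    }
  }
  if (is.na(payoff)) payoff <- exp(-int.r)  # survived to maturity: principal
  price[p] <- payoff
}
mean(price)
```

With these parameters the mean should come out close to the value reported below.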

The result of this program is about 0.830 ± 0.005 (it depends on the seed of the random number generator).
Enjoy !


Changing world, Changing JGB term structure

While writing the article "How much does "Beta" change depending on time?", I learned how to create animations using R, and I would like to continue doing that in this article.
Here I visualize the time series of the JGB term structure published by the Ministry of Finance Japan, because I'm Japanese!
You can download the data from here. The yield curve data are daily, but I visualized them as monthly data for simplicity.
You can easily see how the JGB term structure behaves and how Japanese yields gradually decline.
The source code used to create it is below.
(To run it, you need the xts and animation packages, and ImageMagick.)
# load packages
library(xts)
library(animation)
#Source of JGB curve
source.jgb <- NULL
source.jgb[[length(source.jgb) + 1]] <- "http://www.mof.go.jp/jgbs/reference/interest_rate/data/jgbcm_1974-1979.csv"
source.jgb[[length(source.jgb) + 1]] <- "http://www.mof.go.jp/jgbs/reference/interest_rate/data/jgbcm_1980-1989.csv"
source.jgb[[length(source.jgb) + 1]] <- "http://www.mof.go.jp/jgbs/reference/interest_rate/data/jgbcm_1990-1999.csv"
source.jgb[[length(source.jgb) + 1]] <- "http://www.mof.go.jp/jgbs/reference/interest_rate/data/jgbcm_2000-2009.csv"
source.jgb[[length(source.jgb) + 1]] <- "http://www.mof.go.jp/jgbs/reference/interest_rate/data/jgbcm_2010.csv"
source.jgb[[length(source.jgb) + 1]] <- "http://www.mof.go.jp/jgbs/reference/interest_rate/jgbcm.csv"
#From the Japanese era to the Christian era
ToChristianEra <- function(x){
  era  <- substr(x, 1, 1)
  year <- as.numeric(substr(x, 2, nchar(x)))
  if(era == "H"){        # Heisei
    year <- year + 1988
  }else if(era == "S"){  # Showa
    year <- year + 1925
  }
  year
}
#Download the yield curve and convert it to an xts object
GetJGBYield <- function(source.url){
  jgb <- read.csv(source.url, stringsAsFactors = FALSE)
  #Extract the date column
  jgb.day  <- strsplit(jgb[, 1], "\\.")
  #suppress warnings
  warn.old <- getOption("warn")
  options(warn = -1)
  #From the Japanese era to the Christian era
  jgb.day <- lapply(jgb.day, function(x)c(ToChristianEra(x[1]), x[2:length(x)]))
  #From date string to Date object
  jgb[,  1] <- as.Date(sapply(jgb.day, function(x)Reduce(function(y,z)paste(y,z, sep="-"),x)))
  #Convert the data from string to numeric
  jgb[, -1] <- apply(jgb[, -1], 2, as.numeric)
  options(warn = warn.old)
  #Return as an xts object
  xts(jgb[, -1], order.by = jgb[, 1])
}
#Download the JGB yields
jgb.list <- lapply(source.jgb, GetJGBYield)
#combine into one xts object
jgb.xts <- Reduce(rbind, jgb.list)
#Interpolate missing values (carry the nearest observation)
coredata(jgb.xts) <- na.locf(t(na.locf(t(coredata(jgb.xts)))))
#to monthly data
jgb.xts <- jgb.xts[endpoints(jgb.xts, on="months",k = 1)]
#Labels for the x-axis
label.term <- paste(gsub("X", "", colnames(jgb.xts)), "Y", sep="")
#The range of the y-axis
y.max <- c(min(jgb.xts), max(jgb.xts))
#plot one frame
Snap <- function(val){
  term.structure <- coredata(val)
  index.date     <- index(val)
  plot(t(term.structure), type="l", lwd=3, col = 2, xlab = "Term", ylab = "Rate", ylim = y.max, xaxt = "n")
  axis(1, 1:length(label.term), label.term)
  text(0.5, y.max[2], as.character(index.date), pos = 4)
}
#save as an animation
saveGIF({
  for(i in 1:nrow(jgb.xts)){Snap(jgb.xts[i])}
}, interval = 0.005)