Posts

In this “how-to” post, I want to detail an approach that others may find useful for converting nested (nasty!) json to a tidy (nice!) data.frame/tibble that should be much easier to work with. For this demonstration, I’ll start out by scraping National Football League (NFL) 2018 regular season week 1 score data from ESPN, which involves lots of nested data in its raw form. Then, I’ll work towards getting the data in a workable format (a data.
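As a rough preview of the idea, here is a minimal sketch of unnesting a JSON payload into a tibble with {jsonlite} and {tidyr}. The file name and the “events” field are hypothetical stand-ins for the actual ESPN response, and this is not necessarily the exact approach taken in the post.

```r
# A minimal sketch (not the exact approach in the post). "scores.json" and
# the "events" field are hypothetical stand-ins for the ESPN response.
library(jsonlite)
library(dplyr)
library(tibble)
library(tidyr)

# Read the raw JSON without simplification so the nesting is preserved.
raw <- fromJSON("scores.json", simplifyVector = FALSE)

# Put each nested record in a list-column, then spread its fields out wide.
scores <-
  tibble(event = raw$events) %>%
  unnest_wider(event)
```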

CONTINUE READING

Introduction Much discussion in the R community has revolved around the proper way to implement the “split-apply-combine” strategy. In particular, I love the exploration of this topic in this blog post. It seems that the “preferred” approach is dplyr::group_by() + tidyr::nest() for splitting, dplyr::mutate() + purrr::map() for applying, and tidyr::unnest() for combining. Additionally, many in the community have shown implementations of the “many models” approach in {tidyverse}-style pipelines, often also using the {broom} package.
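To make the pattern concrete, here is a minimal sketch of that pipeline using mtcars as a stand-in data set (a hypothetical example, not one from the post).

```r
# A minimal sketch of the split-apply-combine / "many models" pattern,
# using mtcars as a stand-in data set.
library(dplyr)
library(tidyr)
library(purrr)
library(broom)

mtcars %>%
  group_by(cyl) %>%
  nest() %>%                                            # split
  mutate(
    fit    = map(data, ~ lm(mpg ~ wt, data = .x)),      # apply
    tidied = map(fit, tidy)
  ) %>%
  select(cyl, tidied) %>%
  unnest(tidied)                                        # combine
```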

CONTINUE READING

Introduction As a follow-up to a previous post about correlations between Texas high school academic UIL competition scores and SAT/ACT scores, I wanted to explore some of the “alternatives” to joining the two data sets—which come from different sources. In that post, I simply performed an inner_join() using the school and city names as keys. While this decision ensures that the data integrity is “high”, there are potentially many unmatched schools that could have been included in the analysis with some sound “fuzzy matching”.
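For instance, a sketch of one such alternative with the {fuzzyjoin} package might look like the following; the data frames, distance method, and threshold are hypothetical, not necessarily what I end up using.

```r
# A sketch of one "fuzzy matching" alternative with {fuzzyjoin}. The data
# frames and the max_dist threshold are hypothetical.
library(tibble)
library(fuzzyjoin)

uil <- tibble(school = c("HIGHLAND PARK", "CLEMENS"), city = c("DALLAS", "SCHERTZ"))
tea <- tibble(school = c("HIGHLAND PARK HS", "CLEMENS"), city = c("DALLAS", "SCHERTZ"))

# Join rows whose school and city names are "close enough" by string distance.
stringdist_inner_join(
  uil, tea,
  by = c("school", "city"),
  method = "jw",    # Jaro-Winkler string distance
  max_dist = 0.15
)
```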

CONTINUE READING

Two awesome things inspired this post: {ggplot2}’s version 3.0 release on CRAN, including full support for the {sf} package and new functions geom_sf() and coord_sf(), which make plotting data from shapefiles very straightforward. Jonas Scholey’s blog post discussing the use of “bubble grid” maps as an alternative to choropleth maps, which seem to be used more prevalently. As Jonas implies, using color as a visual encoding is not always the best option, a notion with which I strongly agree.
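As a quick illustration of the idea (not the data used in the post), a bubble-grid-style map with geom_sf() can be sketched with the North Carolina shapefile that ships with {sf}, sizing points instead of filling polygons.

```r
# A minimal sketch of plotting shapefile data with geom_sf()/coord_sf(),
# encoding a value with point size rather than fill color. Uses the North
# Carolina data bundled with {sf}, not the data from the post.
library(sf)
library(ggplot2)

nc <- st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE)
nc_pts <- st_centroid(nc)  # one point per county

ggplot() +
  geom_sf(data = nc, fill = "white", color = "grey80") +
  geom_sf(data = nc_pts, aes(size = BIR74), alpha = 0.5) +
  coord_sf() +
  theme_minimal()
```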

CONTINUE READING

Introduction I wanted to do a follow-up on my series of posts about Texas high school University Interscholastic League (UIL) academic competitions to more closely evaluate the relationship between school performance in those competitions and school-wide SAT and ACT scores. For those who may not be familiar with these tests, these are the two most popular standardized tests used for college admission in the United States. In my introduction to that series, I stated the following: School-wide … scores on state- and national-standardized tests (e.

CONTINUE READING

NOTE: This is part of a series of write-ups discussing my findings of Texas high school academic University Interscholastic League (UIL) competitions. To keep this and the other write-ups concise and to focus reader attention on the content, I have decided not to show the underlying code (especially that which is used to create the visuals).

CONTINUE READING

Competition Participation Some of the first questions that might come to mind are those regarding the number of schools in each level of competition (District, Region, and State) and each conference classification level (1A, 2A, … 6A). It seems fair to say that the distribution of schools among Districts, Regions, and Conferences is relatively even. This is to be expected since the UIL presumably tries to divide schools evenly among each grouping (to the extent possible) in order to stimulate fair competition.

CONTINUE READING

Let’s take a look at individual competitors in the academic UIL competitions. Individual Participation The first question that comes to mind is that of participation–which individuals have competed the most? NOTE: To give some context to the values for individual participants, I’ll include the numbers for myself (“Elhabr, Anthony”) in applicable contexts.

rnk   name              school       city         conf  n
1     Jansa, Wade       GARDEN CITY  GARDEN CITY  1     57
2     Chen, Kevin       CLEMENTS     SUGAR LAND   5     56
3     Hanson, Dillon    LINDSAY      LINDSAY      1     53
4     Gee, John         CALHOUN      PORT LAVACA  4     47
5     Zhang, Mark       CLEMENTS     SUGAR LAND   5     47
6     Robertson, Nick   BRIDGE CITY  BRIDGE CITY  3     46
7     Ryan, Alex        KLEIN        KLEIN        5     46
8     Strelke, Nick     ARGYLE       ARGYLE       3     45
9     Niehues, Taylor   GARDEN CITY  GARDEN CITY  1     44
10    Bass, Michael     SPRING HILL  LONGVIEW     3     43
1722  Elhabr, Anthony   CLEMENS      SCHERTZ      4     13

Note: # of total rows: 123,409
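For reference, a ranking like the one above could be computed along these lines; the persons data frame (one row per individual per competition entry) is a hypothetical placeholder, not the exact code behind the table.

```r
# A sketch of how such a ranking might be computed, assuming a hypothetical
# `persons` data frame with one row per individual per competition entry.
library(dplyr)

persons %>%
  count(name, school, city, conf, sort = TRUE) %>%
  mutate(rnk = row_number(desc(n))) %>%
  select(rnk, everything())
```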

CONTINUE READING

Having investigated individuals elsewhere, let’s now take a look at the schools. NOTE: Although I began the examinations of competitions and individuals by looking at volume of participation (to provide context), I’ll skip an analogous discussion here because the participation of schools is shown indirectly through those analyses. School Scores Let’s begin by looking at some of the same metrics shown for individual students, but aggregated across all students for each school.
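Roughly speaking, that school-level aggregation might be sketched like this; the persons data frame and its column names are hypothetical placeholders.

```r
# A sketch of aggregating individual results up to the school level.
# `persons` and its columns are hypothetical placeholders.
library(dplyr)

schools <-
  persons %>%
  group_by(school, city, conf) %>%
  summarise(
    n_students = n_distinct(name),
    avg_score  = mean(score, na.rm = TRUE),
    .groups = "drop"
  )
```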

CONTINUE READING

There’s a lot to analyze with the Texas high school academic UIL data set. Maybe I find it more interesting than others due to my personal experiences with these competitions. Now, after examining some of the biggest topics associated with this data–including competitions, individuals, and schools–in a broad manner, there are some other things that don’t necessarily fall into these categories that I think are worth investigating. Siblings Let’s look at the performance of siblings.

CONTINUE READING

Don’t Repeat Yourself (DRY) Probably everyone who has done some kind of programming has heard of the “Don’t Repeat Yourself” (DRY) principle. In a nutshell, it’s about reducing code redundancy for the purpose of reducing error and enhancing readability. Undoubtedly the most common manifestation of the DRY principle is the creation of a function for re-used logic. The “rule of 3” is a good shorthand for identifying when you might want to rethink how your code is organized: “You should consider writing a function whenever you’ve copied and pasted a block of code more than twice (i.
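As a toy illustration of the rule (not code from the post), the copy-paste pattern and its function-ified replacement might look like this; the data frame and column names are hypothetical.

```r
# A toy illustration of the "rule of 3" (not code from the post); the data
# frame `results` and its columns are hypothetical.
library(dplyr)

# Before: the same filter/summarise logic pasted once per competition type...
# math_avg     <- results %>% filter(comp == "math")     %>% summarise(avg = mean(score))
# science_avg  <- results %>% filter(comp == "science")  %>% summarise(avg = mean(score))
# spelling_avg <- results %>% filter(comp == "spelling") %>% summarise(avg = mean(score))

# After: one function, called as many times as needed.
summarise_comp <- function(data, which_comp) {
  data %>%
    filter(comp == which_comp) %>%
    summarise(avg = mean(score, na.rm = TRUE))
}
```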

CONTINUE READING

I’ve experimented with the {flexdashboard} package for a couple of things after first trying it out not so long ago. In particular, I found the storyboard format to be my favorite. I used it to create the storyboard that I wrote about in a previous post about tracking the activity of NBA team Twitter accounts. I also used {flexdashboard} for a presentation that I gave at my company’s data science group.

CONTINUE READING

The Problem I have a bunch of data that can be categorized into many small groups. Each small group has a set of values for an ordered set of intervals. Having observed that the values for most groups seem to increase with the order of the interval, I hypothesize that there is a statistically significant, monotonically increasing trend. An Analogy To make this abstract problem more relatable, imagine the following scenario.
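One way to formalize that check is a per-group Spearman rank correlation between interval order and value, sketched below; the data frame and its columns (grp, interval, value) are hypothetical names, and this is not necessarily the test used later in the post.

```r
# A sketch of testing for a monotonic trend within each group via a Spearman
# rank correlation. `df` and its columns (grp, interval, value) are
# hypothetical; the post may use a different test.
library(dplyr)
library(tidyr)
library(purrr)
library(broom)

df %>%
  group_by(grp) %>%
  nest() %>%
  mutate(
    htest  = map(data, ~ cor.test(.x$interval, .x$value, method = "spearman")),
    tidied = map(htest, tidy)
  ) %>%
  select(grp, tidied) %>%
  unnest(tidied) %>%
  select(grp, estimate, p.value)  # estimate is Spearman's rho
```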

CONTINUE READING

NOTE: This write-up picks up where the previous one left off. All of the session data is carried over. Color Similarity Now, I’d like to evaluate color similarity more closely. To help verify any quantitative deductions with some intuition, I’ll consider only a single league for this–the NBA, the league that I know the best. Because I’ll end up plotting team names at some point and some of the full names are relatively lengthy, I want to get the official abbreviations for each team.
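As a rough idea of what “similarity” means here, hex colors can be converted to a perceptual color space and compared with pairwise distances; the hex codes below are hypothetical placeholders, and this is not necessarily the exact metric used in the write-up.

```r
# A sketch of quantifying color similarity: convert hex codes to Lab space
# and compute pairwise distances. The hex values are hypothetical placeholders.
library(farver)

hex <- c(team_a = "#000000", team_b = "#CE1141", team_c = "#E03A3E")
lab <- decode_colour(hex, to = "lab")
rownames(lab) <- names(hex)

dist(lab)  # smaller distances imply more similar colors
```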

CONTINUE READING

When working with the {ggplot2} package, I often find myself playing around with colors for longer than I probably should. I think that this is because I know that the right color scheme can greatly enhance the information that a plot portrays; and, conversely, choosing an uncomplementary palette can suppress the message of an otherwise good visualization. With that said, I wanted to take a look at the presence of colors in the sports realm.

CONTINUE READING

I just wrapped up a mini-project that allowed me to do a handful of things I’ve been meaning to do: Try out the {flexdashboard} package, which is supposed to be good for prototyping larger dashboards (perhaps created with {shinydashboard}). Test out my (mostly completed) personal {tetext} package for quick and tidy text analysis. (It implements a handful of the techniques shown by David Robinson and Julia Silge, in their blogs and in their Tidy Text Mining with R book.

CONTINUE READING

I’m always intrigued by “meta” analyses of programming and data science. For example, Matt Dancho’s analysis of renowned data scientist David Robinson. David Robinson himself has done some good ones, such as his blog posts for Stack Overflow highlighting the “incredible” growth of Python and the “impressive” growth of R in modern times. With that in mind, I thought I would try to identify whether any interesting trends have risen/fallen within the R community in recent years.

CONTINUE READING

I’m happy to announce that I’ve finished converting the bulk of my old posts to an e-book, using Yihui Xie’s wonderful {bookdown} package. The e-book is live on the docs branch of a GitHub repo. The posts (now chapters) apply concepts in the field of decision analysis to evaluate “value” in the NBA Draft. Although analysis of the NBA draft itself is certainly not novel, I think my approach is fairly original.

CONTINUE READING

In this post, I’ll continue my discussion of working with regularly sampled interval data using R. (See my previous post for some insight regarding minute data.) The discussion here focuses more on function design. Daily Data When I’ve worked with daily data, I’ve found that the .csv files tend to be much larger than those for data sampled on a minute basis (as a consequence of each file holding data for sub-daily intervals).
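In the spirit of that function-design discussion, a reader for a directory of daily .csv files might be sketched like this; the directory, file pattern, and use of {readr}/{purrr} are assumptions for illustration, not the exact functions from the post.

```r
# A sketch of a reader for a directory of daily .csv files; the directory,
# pattern, and column handling are placeholders, not the post's actual code.
# (map_dfr() requires the {dplyr} package to be installed for row-binding.)
library(readr)
library(purrr)

read_daily_data <- function(dir, pattern = "\\.csv$") {
  files <- list.files(dir, pattern = pattern, full.names = TRUE)
  # Name the vector by file path so the "file" column records the source.
  map_dfr(set_names(files), read_csv, col_types = cols(), .id = "file")
}

# daily <- read_daily_data("data/daily")
```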

CONTINUE READING

In my job, I often work with data sampled at regular intervals. Samples may range from 5-minute intervals to daily intervals, depending on the specific task. While working with this kind of data is straightforward when it’s in a database (and I can use SQL), I have been in a couple of situations where the data is spread across .csv files. In these cases, I lean on R to scrape and compile the data.

CONTINUE READING