To validate the deepMedic neural network algorithm, I created 30 models with identical parameters but varied subsets of training and validation cases from the Comprehensive Neuro-Oncology Data Repository (CONDR). Each model was trained on 30 and validated on 11 T1-weighted, post-contrast magnetic resonance images, with images assigned to the training set at random with uniform probability. Ten of the models were trained on the same set of imaging studies but with signal intensity normalized to zero mean and unit variance, a technique suggested by the deepMedic developers. Model prediction accuracy was evaluated using the Dice similarity coefficient (DSC), surface distance (SD), Hausdorff surface distance (HSD), and the Euclidean distance between tumor centroids (ED). Ordinary least squares (OLS) regression was used to evaluate the effect of normalization on each measure of prediction accuracy.
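The intensity normalization and two of the accuracy metrics can be sketched in a few lines of NumPy. The function names here are illustrative, not part of deepMedic or the evaluation pipeline:

```python
import numpy as np

def normalize_intensity(image):
    """Rescale signal intensity to zero mean and unit variance."""
    return (image - image.mean()) / image.std()

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def centroid_distance(pred, truth):
    """Euclidean distance between the centroids of two binary masks."""
    c_pred = np.array(np.nonzero(pred)).mean(axis=1)
    c_true = np.array(np.nonzero(truth)).mean(axis=1)
    return float(np.linalg.norm(c_pred - c_true))
```

The same functions apply unchanged to 2D or 3D masks, since the centroid is computed over whatever axes the mask has.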
I am currently exploring variation in validation accuracy between models trained once on a single large training set and models trained on a smaller set and then retrained with new inputs.
Using income tax data, Chetty and colleagues (2014) identified effects of geography on social mobility. This project is a client-side interactive tool for mapping and visualizing correlations with scatter plots, trend lines, and OLS regression tables. Features that allow users to query and merge county-level data from the US Census Bureau APIs are currently under development.
The standard dataset includes a large number of estimated exposure effects of living in a given county on earnings in adulthood, as well as many covariates of potential interest such as racial segregation, educational attainment, measures of social capital, and access to institutions.
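The OLS fits behind the regression tables can be sketched with NumPy. The `ols_fit` helper and its return values are illustrative, not the tool's actual API; in the tool, `x` would be a covariate (e.g. a segregation measure) and `y` the estimated exposure effect:

```python
import numpy as np

def ols_fit(x, y):
    """Simple OLS of y on x with an intercept.

    Returns (intercept, slope, r_squared)."""
    X = np.column_stack([np.ones_like(x), x])   # design matrix with constant
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return float(beta[0]), float(beta[1]), float(1.0 - ss_res / ss_tot)
```

A single-regressor fit like this is what a scatter plot's trend line visualizes; the full regression tables would extend the design matrix with additional covariate columns.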
Spatial genetic algorithm
In collaboration with Ariel Gómez, we developed a spatial model that evolves optimal game-playing strategies with a genetic algorithm.
Agents on an N × N torus were initialized with a random set of strategies for playing rock, paper, scissors against the agents in their von Neumann neighborhoods. Agents at or above a given percentile of games won were selected to “mate,” and the torus was re-initialized with their progeny. Agents' moves were governed, in principle, by memory of the moves made by agents in a given neighborhood.
Additionally, each agent was initialized with a random number of neighbors to challenge. In subsequent rounds, agents challenged the same number of agents as the highest-scoring agent in their neighborhood had challenged in the previous round.
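A minimal sketch of one tournament round on the torus, assuming four-cell von Neumann neighborhoods and standard rock-paper-scissors win rules. The grid representation (a square list of move strings) and function names are hypothetical, and for brevity each agent challenges all four neighbors rather than a variable number:

```python
MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def von_neumann_neighbors(i, j, n):
    """Coordinates of the four von Neumann neighbors on an n x n torus."""
    return [((i - 1) % n, j), ((i + 1) % n, j),
            (i, (j - 1) % n), (i, (j + 1) % n)]

def play_round(grid):
    """One round: each agent challenges every neighbor; returns per-cell win counts."""
    n = len(grid)
    wins = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for ni, nj in von_neumann_neighbors(i, j, n):
                if BEATS[grid[i][j]] == grid[ni][nj]:
                    wins[i][j] += 1
    return wins
```

The selection step would then rank cells by their win counts, keep those above the chosen percentile, and repopulate the torus with their progeny.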
We then fit models for trends in the number of challenges made by agents in the population, as well as systems of differential equations for the number of times an agent would play rock, paper, or scissors.
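The particular system of equations fit in the project is not reproduced here; as an illustrative stand-in, standard replicator dynamics for rock-paper-scissors frequencies can be integrated with a simple Euler step:

```python
def replicator_step(x, dt=0.01):
    """One Euler step of replicator dynamics for (rock, paper, scissors) frequencies.

    Each strategy's fitness is the frequency of the strategy it beats minus the
    frequency of the strategy that beats it (zero-sum payoffs)."""
    r, p, s = x
    fitness = (s - p, r - s, p - r)  # rock beats scissors, loses to paper, etc.
    avg = r * fitness[0] + p * fitness[1] + s * fitness[2]
    return tuple(xi + dt * xi * (fi - avg) for xi, fi in zip(x, fitness))
```

Under these dynamics the uniform mix (1/3, 1/3, 1/3) is a fixed point, and the frequencies continue to sum to one at every step, which makes the sketch easy to sanity-check.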