Sunday, November 19, 2023

PFun Fiona, AI-MD

A sneak peek of PFun Fiona, AI-MD: the one and only Physiofunctional Digital Health Assistant.

Click to learn more about PFun Digital Health.

Tuesday, May 16, 2023

For fun!



Friday, May 5, 2023

Messing around with music & stuff

(Updated: 2023-05-16)

I've been messing around with the read/write methods in scipy.io.wavfile to make some "music" using the output of some of my biophysical models.
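The original snippet isn't shown here, but the core move with scipy.io.wavfile is small: normalize the model trace and write it out as 16-bit PCM. A minimal sketch (the function name and normalization choices are mine):

```python
import numpy as np
from scipy.io import wavfile

def model_to_wav(signal, path, rate=44100):
    """Normalize a 1-D model trace to 16-bit PCM and write it as a WAV file."""
    sig = np.asarray(signal, dtype=np.float64)
    sig = sig - sig.mean()                # remove any DC offset
    peak = np.max(np.abs(sig))
    if peak > 0:
        sig = sig / peak                  # scale to [-1, 1]
    pcm = (sig * 32767).astype(np.int16)  # convert to 16-bit PCM samples
    wavfile.write(path, rate, pcm)
    return pcm
```

Any 1-D model output works here; the sample rate just controls how fast the simulation "plays".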

Example 1

For example, here's some simulation output with an audio track (caution: it may be loud):


Another interesting point: these sounds can be parameterized.

Example 2

Here's another example with different parameters (again, it may be loud):


So as you can see, there's a wide range of different sound patterns that can be produced, even with just this one model.

Next Steps...

I'm thinking I'll make a public API at some point in the near future so you can make your own physiological music... 😉


Update!

Some new examples... with rhythm!

Thanks to my friend's sage advice, here are a few examples with a subset of parameters mapped to band-pass filters.

Example 2-0

We can get some rhythm by using a binary column as a low-pass filter...
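The code isn't shown in the post, but one plausible reading of "using a binary column" for rhythm is an on/off amplitude envelope: stretch the 0/1 model column out to the length of the audio and multiply it in. A sketch under that assumption (gate_audio is my name, not the post's):

```python
import numpy as np

def gate_audio(audio, binary_column):
    """Use a binary model column as an on/off amplitude gate.

    binary_column holds one 0/1 value per model time step; it is stretched
    (sample-and-hold) to the length of the audio and multiplied in,
    silencing the audio wherever the column is 0.
    """
    audio = np.asarray(audio, dtype=np.float64)
    gate = np.asarray(binary_column, dtype=np.float64)
    # map each audio sample index onto the corresponding gate value
    idx = np.minimum((np.arange(audio.size) * gate.size) // audio.size,
                     gate.size - 1)
    return audio * gate[idx]
```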

Example 2-1

Example 2-2

Here's another with a few different band-pass filters that produces an interesting pattern...
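For the band-pass filters themselves, scipy.signal makes this a few lines. A hedged sketch (the post doesn't show its filter settings, so the order and band edges here are placeholders):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def bandpass(signal, low_hz, high_hz, rate=44100, order=4):
    """Apply a Butterworth band-pass filter to a 1-D audio signal."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass",
                 fs=rate, output="sos")
    return sosfilt(sos, signal)
```

Mapping different model parameters to low_hz/high_hz is one way to get the distinct timbres in the examples above.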

Tuesday, April 25, 2023

Friday, October 14, 2022

Dawn of a new era!

It's been a while! Here are some graphical non-sequiturs for your viewing pleasure:

Monday, May 14, 2018

Dynamical systems model of cardio-respiratory interactions

Currently I'm working on a model of cardio-respiratory interactions. Given human experimental data, the model reproduces the average heart rate, the average respiratory phase duration (inspiration/expiration), and the Respiratory Sinus Arrhythmia (RSA). The most sophisticated part of the model is the respiratory central pattern generator (previously published). My current goal is to optimize the model so that it reproduces RSA for several individuals.
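To make the RSA idea concrete: heart rate rises during inspiration and falls during expiration, in phase with breathing. A deliberately toy illustration of that coupling (this is not the published model; all constants are illustrative):

```python
import numpy as np

def toy_rsa_heart_rate(t, hr_mean=70.0, rsa_depth=5.0, breath_period=4.0):
    """Instantaneous heart rate (bpm) at times t (seconds), with a
    sinusoidal RSA modulation locked to the respiratory phase."""
    resp_phase = 2.0 * np.pi * np.asarray(t) / breath_period
    return hr_mean + rsa_depth * np.sin(resp_phase)
```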

After I get it working with the current dataset-- which was measured while participants watched Disney's Fantasia-- the next step is to fit the model to real clinical data from septic ER patients. The end goal is for our collaborators to use our model as a basis for a machine learning algorithm that predicts the risk and most likely cause of death for septic patients.

Below are some pretty pictures generated by the model.

Monday, January 16, 2017

Haiku Twitter Bot

Over summer 2016, I decided to make a Twitter bot just for fun. Here, I'll describe the bot's programming at a high level. But first, a little background...

I started undergrad as an English major... then graduated with a degree in neuroscience. Although I have pretty firmly switched my career plans toward math and science, I still appreciate and respect art. As a reflection on my artistic pipe dream, I designed my bot to write haikus. If you don't remember haikus from high school, hit that link in the previous sentence.

For connecting to Twitter, I used the tweepy library. I simply wrote a subclass of StreamListener to process incoming tweets and write them to a file called tweets.txt. You can read more about this process in the tweepy docs (linked above).

Each time I start my bot, most of the high-level functionality is located in one function, do_tweet().

For counting syllables and determining parts of speech, I used NLTK.

Counting syllables:
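The original snippet is missing here; as a stand-in, a crude vowel-group heuristic shows the shape of the problem (the bot itself uses NLTK's resources, so treat this as an approximation):

```python
VOWELS = set("aeiouy")

def count_syllables(word):
    """Rough syllable count: number of contiguous vowel groups.

    A crude stand-in for a pronunciation dictionary; good enough
    to illustrate the idea, not the bot's actual method.
    """
    word = word.lower().strip()
    groups = 0
    prev_vowel = False
    for ch in word:
        is_vowel = ch in VOWELS
        if is_vowel and not prev_vowel:
            groups += 1
        prev_vowel = is_vowel
    # drop a silent trailing 'e' ("make" loses a group, "table" keeps its)
    if word.endswith("e") and not word.endswith("le") and groups > 1:
        groups -= 1
    return max(groups, 1)
```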


Choosing words for the haiku:

As you can see, the algorithm randomly chooses a word of syllable length sw, which is generated by the pick_syl() function. If this is hard to interpret, don't worry. It took me a while to come up with an algorithm that would randomly choose words but still adhere to the 5/7/5 haiku format. The conditions like if ix == 0 and line == 0 are there to determine which line of the haiku is being written. In this case, the first word of the first line is being chosen. Then the lines:

# keep only the nouns from the syllable dictionary
sdict1 = {sk: sv for sk, sv in sdict.items()
          if pos_tag(word_tokenize(sk), tagset='universal')[0][1] == 'NOUN'}
gsyls = syl_of_size(sdict1, sw)  # choose a noun of syllable length sw
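For reference, here's a guess at what pick_syl() and syl_of_size() might look like; these are my reconstructions, not the bot's actual code:

```python
import random

def pick_syl(remaining):
    """Randomly pick a syllable count for the next word, between 1 and
    the number of syllables still unfilled on the current haiku line."""
    return random.randint(1, remaining)

def syl_of_size(sdict, sw):
    """Return the words in sdict whose syllable count equals sw."""
    return [word for word, syl in sdict.items() if syl == sw]
```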

Then, setting up the haiku in order and writing to the file haiku.txt:
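That code isn't reproduced here; a minimal version of the write step might look like this (write_haiku is my name for it):

```python
def write_haiku(lines, path="haiku.txt"):
    """Join the three chosen lines into 5/7/5 form and save them.

    lines is a list of three word lists, one per haiku line.
    """
    haiku = "\n".join(" ".join(words) for words in lines)
    with open(path, "w") as f:
        f.write(haiku + "\n")
    return haiku
```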

So currently this bot would write haikus with randomly chosen words, which is cool... but we can do better! ;^)
Eventually, when I have a little more time, I'll give meta ai ai haiku more smarts, but for now the bot uses a simple algorithm for determining the "best" tweets, then recycles words from these tweets. Dumb, I know... it's a work in progress.


The function get_best() chooses tweets with the highest "weight" to include in the bot's corpus for writing future haikus.
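The weighting itself isn't specified in the post; as an assumed example, engagement counts could serve as the weight (the field names and the 2x retweet factor below are mine):

```python
def get_best(tweets, n=10):
    """Rank tweets by a simple engagement weight and keep the top n
    for the bot's corpus. Assumes each tweet is a dict with optional
    'favorite_count' and 'retweet_count' fields."""
    def weight(tweet):
        return tweet.get("favorite_count", 0) + 2 * tweet.get("retweet_count", 0)
    return sorted(tweets, key=weight, reverse=True)[:n]
```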

Here's a link to my Twitter bot meta ai ai haiku: robcapps.com/docs/haiku. Please feel free to ask questions in the comments section!