I’ve been working hard this last month after returning from my summer holidays.
Firstly, I’m preparing my first article in Finnish for Kulutustutkimus.Nyt, to introduce myself to my fellow Finnish consumer economics researchers. The first version wrapped up everything and anything I did last academic year: all my writings, research plans, presentations and actual analyses. At one stage, the text was 60 pages long! To make things more sensible, I gave a first presentation (the second version of the text) to my colleagues at the Consumer Society Research Center on September 8th, with the goal of hearing their ideas on what to include and what to focus on. After their invaluable feedback, I managed to reduce the text (the current version) to the 4,500-word maximum for the journal.
Secondly, I set out to do some reading and some further analysis of my material. I will discuss the reading & theories part some other time. But during the summer and my Singapore trip, I got another version of an interesting part of the data set from Fin-Clarin. (Downloading it took several nights, and writing code to clean it up took a couple of days.) LDA analysis of a few samples, and then of the whole set, gave me an idea of the context, of what has been happening at the forum. I can report this analysis in my Finnish-language article (a descriptive presentation of how algorithms can be used), present it at the PSRC conference (more focused on the potential to find what my research questions expect to find), and at our economic & social history research seminar (a combination of both). And I can develop this further into an English-language article!
My first article will most likely be a methodological paper with two purposes: one, to enable me to limit my data set to “only” those discussions that interest me, which will include a temporal aspect and a narrative of weight-related health issues; two, to represent and formalize the qualitative part of the analysis, where algorithms help the researcher focus their gaze on the phenomena present in the massive material. What I have in mind is developing, piloting and visualizing the outcome: a microscope, or magnifying glass, for big data. That would formalize one part of the “data science art”, making it replicable, and duly help the future massive-data qualitative researcher without much knowledge of programming. But, depending on what I find, the focus will need some developing after comments from my audiences. It is all very exciting!