• 0 Posts
  • 323 Comments
Joined 3 years ago
Cake day: June 9th, 2023





  • Useful context: I am a biochemist with a passing interest in neuroscience (plus some friends who work in neuroscience research).

    A brief minor point is that you should consider uploading the preprint as a pdf instead, as .docx can cause formatting errors if people aren’t using the same word processor as you. Personally, I saw some formatting issues related to this (though nothing too serious).

    Onto the content of your work, something I think your paper would benefit from is linking to established research throughout. Academia’s insistence on good citations can feel like it’s mostly just gatekeeping, but it’s pretty valuable for demonstrating that you’re aware of the existing research in the area. This is especially important because research in a topic like this tends to attract a lot of cranks (my friends tell me that they fairly frequently get slightly unhinged emails from people who are adamant that they have solved the theory of consciousness). Citations throughout the body of your research make it clear which points are your own and which are established research.

    Making it clear what you’re drawing on is especially important for interdisciplinary research like this, because it helps people who know one part of things really well but don’t know much about the others. For example, although I am familiar with Friston’s paper, I don’t know what has happened in the field since then. I also know some information theory stuff, but not much. Citations are a way of implicitly saying “if you’re not clear on where we’re getting this particular thing from, you can go read more here”.

    For example, if you have a bit that’s made up of 2 statements:

    • (1): Something that’s either explicitly stated in Friston’s paper, or is a straightforwardly clear consequence of something explicitly stated
    • (2): Something that your analysis is adding to Friston’s as a novel insight or angle

    Then you can make statement 2 go down far easier if that first statement is clearly cited. I use Friston in this example both because I am familiar with the work and because I know that paper was somewhat controversial in some of its assumptions and conclusions. Making it clear which points are new ones you’re making vs. established stuff that’s already been thoroughly discussed in its field can act sort of like a firebreak against criticism: you get the best of both worlds, building on top of existing research while also being able to say “hey, if you have beef with that original take, go take it up with them, not us”. It also makes it easier for someone to know what’s relevant to them: a neuroscientist studying consciousness who doesn’t vibe with Friston’s approach would not have much to gain from your paper, for instance.

    It’s also useful to do some amount of summarising the research you’re building on, because this helps to situate your research. What’s neuroscience’s response to Friston’s paper? Has there been much research building upon it? I know there have been criticisms against it, and that can also be a valid angle to cover, especially if your work helps seal up some holes in that original research (or makes the theory more useful such that it’s easier to overlook the few holes). My understanding is that the neuroscientific answer to “what even is consciousness?” is that we still don’t know, and that there are many competing theories and frameworks. You don’t need to cover all of those, but you do need to justify why you’re building upon this particular approach.

    In this case specifically, I suspect that the reason for building upon Friston is because part of the appeal of his work is that it allows for this kind of mathsy approach to things. Because of this, I would expect to see at least some discussion of some of the critiques of the free energy principle as applied to neuroscience, namely that:

    • The “Bayesian brain” has been argued to be an oversimplification
    • Some argue that the application of physical principles to biological systems in this manner is unjustified (this is linked to the oversimplification charge)
    • Maths-based models like this are hard to empirically test.

    Linked to the empirical testing, when I read the phrase “yielding testable implications for cognitive neuroscience”, I skipped ahead because I was intrigued to see what testable things you were suggesting, but I was disappointed to not see something more concrete on the neuroscience side. Although you state

    “The values of dI/dT can be empirically correlated with neuro-metabolic and cognitive markers — for example, the rate of neural integration, changes in neural network entropy, or the energetic cost of predictive error.”

    that wasn’t much to go on for learning about current methods used to measure these things. Like I say, I’m very much not a neuroscientist, just someone with an interest in the topic, which is why I was interested to see how you proposed to link this to empirical data.

    I know you go more into depth on some parts of this in section 8, but I had my concerns there too. For instance, in section 8.1, I am doubtful of whether varying the temporal rate of novelty as you describe would be able to cause metabolic changes that would be detectable using the experimental methods you propose. Aren’t the energy changes we’re talking about super small? I’d also expect that for a simple visual input, there wouldn’t necessarily be much metabolic impact if the brain were able to make use of prior learning involving visual processing.

    I hope this feedback is useful, and hopefully not too demoralising. I think your work looks super interesting and the last thing I want to do is gatekeep people from participating in research. I know a few independent researchers, and indeed, it looks like I might end up on that path myself, so God knows I need to believe that doing independent research that’s taken seriously is possible. Unfortunately, making one’s research acceptable to the academic community requires jumping through a bunch of hoops like following good citation practice. Some of these requirements are a bit bullshit and gatekeepy, but a lot of them are an essential part of how the research community has learned to interface with the impossible deluge of new work they’re expected to keep up to date on. Interdisciplinary research makes it especially difficult to situate one’s work in the wider context of things. I like your idea though, and think it’s worth developing.


  • I liked that although Knights of Guinevere was clearly ragging on Disney, it felt like it wasn’t just a cathartic trauma dump from Dana Terrace and crew — it was actually being used to say something meaningful. It’s a good sign when the pilot episode of a show has such a strong sense of themes.

    I’d heard a lot of hype when the pilot was released, but didn’t get around to watching it until I randomly thought “I wonder what Dana Terrace is up to nowadays? Hopefully she’s working somewhere better than Disney, because surely there must be someone with power out there who recognised how Disney was squandering her potential”. When I saw that it was her and some of the Owl House team who made Knights of Guinevere, I immediately went and watched it. The only disappointment was that we don’t know when new episodes will be available, but hopefully releases will be regular once we do start getting them.



  • So many people outside of academia are gobsmacked to learn the extent to which academic publishing relies on free labour, and how much they charge.

    To publish a paper open access in Nature, it costs almost $7000. And for what? What the fuck do they actually do? If you want to make the data or code you used in your analysis available, you’re the one who has to figure out how to host it. They don’t provide copyediting services or anything of the like. I’d call them parasites, but that would be an insult to all the parasitic organisms that play important roles within their respective ecosystems.

    Perhaps once, they served an essential role in facilitating research, back when physical journals were the only way to get your research out there, but that age has long since passed and they’ve managed to use that change to profit even more.

    Sure, the individual researchers are rarely paying this fee themselves, but that’s still a problem. For one, it gatekeeps independent researchers, or researchers from less well funded academic institutions (such as in the global South or emerging economies). Plus even if the individual researchers aren’t paying directly, that money still comes out of the overall funding for the project. The cost of 4 papers published in Nature is an entire year’s stipend for a PhD student in my country. I’m using Nature as an example here because they are one of the more expensive ones, but even smaller journals charge exorbitant amounts (and don’t get me started on how people who justify the large fees charged by more prestigious journals don’t acknowledge how this just perpetuates the prestige machine that creates the toxic “publish or perish” pressure of research).

    The most offensive bit though is that if you are doing government funded research, then you have to pay an extra fee to make that research available to the taxpayers who funded it. It’s our fucking research, you assholes! How dare you profit off of coerced free labour and then charge us to even be able to access what is rightfully ours. France has the right idea here — they have legislation that mandates that all government funded research must be open access. That doesn’t solve the root problem of needing to eradicate the blight of the academic publishing industry as it currently exists, but it’s a start.

    I know I’m preaching to the choir here, but once I started writing, my rage overcame me and it was cathartic to scream it out from my soapbox.





  • I’ve recently been playing it and I’ve been blown away.

    I find the parry system not too bad, actually. I’m more than halfway through the game and I’ve only recently started properly learning how to parry, but I love how easy the game makes it to learn. Context for anyone who hasn’t played the game: when you successfully dodge, you see the word “dodge” rather than damage numbers, and when you do a perfect dodge, you see “perfect dodge”. The window for parrying is smaller than for dodging, but this dodge system meant that when I started noticing I was somewhat consistently getting perfect dodges, I decided I should try parrying more often.

    I find the overall difficulty tuning to be excellent. Even on normal difficulty, it’s definitely challenging at points, but it feels extremely fair. You won’t be able to defeat all enemies that you’re able to access at any given time — there’s so many times that I’ve tried my chances with a big guy just hanging out on the map, only to get my team wiped in one hit. However, the open world, with so much to explore, means I can go away and come back later. Upgrade materials are scattered all over, so exploration is super powerful.

    I agree with you that the highlight of the game is how beautiful it is. There have been a few times where I’ve had to stop for a moment and just take in the scenery, because it was so soul-achingly beautiful that I could scarcely think.


  • You’re literally quoting marketing materials to me. For what it’s worth, I’ve already done more than enough research to understand where the technology is at; I dove deep into learning about machine learning in 2020, when AlphaFold 2 was taking the structural biology world by storm — I wanted to understand how it had done what it had, which started a long journey of accidentally becoming a machine learning expert (at least, compared to other biochemists and laypeople).

    That knowledge informs the view in my original comment. I am (or at least, was) incredibly excited about the possibilities, and I do find much of this extremely cool. However, what has dulled my hype is how AI is being indiscriminately shoved into every orifice of society when the technology simply isn’t mature enough for that yet. Will there be some fields that experience blazing productivity gains? Certainly. But I fear any gains will be more than negated through losses in sectors where AI should not be deployed, or where it should be applied more wisely.

    Fundamentally, when considering its wider effect on society, I simply can’t trust the technology — because in the vast majority of cases where it’s being pushed, there’s a thoroughly distrustful corporation behind it. What’s more, there’s increasing evidence that this just simply isn’t scalable. When you look at the actual money behind it, it becomes clear that the reason why it’s being pushed as a magical universal multi-tool is because the companies making these models can’t make them profitable, but if they can drum up enough investor hype, they can keep kicking that can down the road. And you’re doing their work for them — you’re literally quoting advertising materials at me; I hope you’re at least getting paid for it.

    I remain convinced that the models that are most prominent today are not going to be what causes mass automation on the scale you’re suggesting. They will, no doubt, continue to improve — there’s so many angles of attack on that front: Mixture of Experts (MoE) and model distillation to reduce model size (this is what made DeepSeek so effective); Retrieval Augmented Generation (RAG) to reduce hallucinations and let output be tailored on a small scale using a supplementary knowledgebase; reducing the harmful effects of training on synthetic data so you can do more of it before model collapse happens — there’s countless ways that they can incrementally improve things, but it’s just not enough to overcome the hard limits on these kinds of models.

    My biggest concern, as a scientist, is that what additional progress there could be in this field is being hampered by the excessive evangelising of AI by investors and other monied interests. For example, if a company wanted to make a bot for low-risk customer service or an internal knowledgebase using RAG, this would require the model to have access to high quality documentation to draw from — and speaking as someone who has contributed a few times to open-source software documentation, let me tell you that that documentation is, on average, pretty poor quality (and open source is typically better than closed source for this, which doesn’t bode well). Devaluing human expertise and labour is just shooting ourselves in the foot, because what is there to train on if most of the human writers are sacked?
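
    To make the RAG bit more concrete, here’s a rough sketch of the pattern (a toy word-overlap score stands in for real vector embeddings, and names like retrieve and build_prompt are just ones I’ve made up for illustration): pull the most relevant snippets out of a supplementary knowledge base and hand them to the model alongside the question, so it answers from grounded text rather than from memory alone.

    ```python
    # Toy retrieval-augmented generation (RAG) sketch: fetch relevant snippets
    # from a small knowledge base and prepend them to the prompt, so the model
    # answers from grounded text instead of relying purely on its training data.
    # The word-overlap similarity below is purely illustrative; real systems use
    # vector embeddings and a proper index.

    def word_overlap(query: str, doc: str) -> float:
        """Score a document by the fraction of query words it shares (toy metric)."""
        q_words = set(query.lower().split())
        d_words = set(doc.lower().split())
        return len(q_words & d_words) / (len(q_words) or 1)

    def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
        """Return the k snippets most relevant to the query."""
        ranked = sorted(knowledge_base, key=lambda doc: word_overlap(query, doc), reverse=True)
        return ranked[:k]

    def build_prompt(query: str, knowledge_base: list[str]) -> str:
        """Assemble a prompt that grounds the model in the retrieved snippets."""
        context = "\n".join(f"- {snippet}" for snippet in retrieve(query, knowledge_base))
        return (
            "Answer using only the context below. If the context is insufficient, say so.\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer:"
        )

    if __name__ == "__main__":
        kb = [
            "Refunds are processed within 14 days of receiving the returned item.",
            "Support is available Monday to Friday, 9am to 5pm.",
            "Devices carry a two-year manufacturer warranty.",
        ]
        # The assembled prompt would then be sent to whichever model you're using.
        print(build_prompt("How long do refunds take?", kb))
    ```

    Which is exactly why the quality of the underlying documentation matters so much: the retrieval step can only ever be as good as the knowledge base it’s drawing from.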

    Plus there’s the typical old notion around automation leading to loss of low skilled jobs, but the creation of high skilled roles to fix and maintain the “robots”. This isn’t even what’s happening, in my experience. Even people in highly skilled, not-currently-possible-to-automate jobs are being pushed towards AI pipelines that are systematically deskilling them; we have skilled computer scientists and data scientists who are unable to understand what goes wrong when one of these systems fucks up, because all the biggest models are just closed boxes, and “troubleshooting” means acting like an entry level IT technician and just trying variations of turning it off and on again. It’s not reasonable to expect these systems to be perfect — after all, humans aren’t perfect. However, if we are relying on systems that tend to make errors that are harder for human oversight to catch, as well as reducing the number of people trying to catch them, then that’s a recipe for trouble.

    Now, I suspect here is where you might say “why bother having humans try to catch the errors when we have multimodal agentic models that are able to do it all”. My answer to that is that it’s a massive security hole. Humans aren’t great at vetting AI output, but we are tremendously good at breaking it. I feel like I read a paper for some ingeniously novel hack of AI every week (using “hack” as a general term for all prompt injection, jailbreak etc. stuff). I return to my earlier point: the technology is not mature enough for such widespread, indiscriminate rollout.

    Finally, we have the problem of legal liability. There’s that old IBM slide that’s repeatedly done the rounds over the last few years: “A computer can never be held accountable, therefore a computer must never make a management decision.” Often the reason why we need humans to keep an eye on systems is that legal systems demand at least the semblance of accountability, and we don’t have legal frameworks for figuring out what the hell to do when AI or other machine learning systems mess up. There was recently a news story about police officers going to ticket an automated taxi (a Waymo, I think) that had broken traffic laws, only to not know what to do when they found it was driverless. Sure, parking fines can be sent to the company, and that doesn’t seem too hard to write regulations for, but with human drivers, if you incur a large number of small violations, it’s typical to end up with a larger punishment, such as having one’s driver’s licence suspended. What would even be the equivalent level of higher punishment for driverless vehicles? It seems that no-one knows, and concerns like these are causing regulators to reconsider their rollout. Sure, new laws can be passed, but our legislators are often tech illiterate, so I don’t expect them to easily solve what prominent legal and technology scholars are still grappling with. That process will take time, and the more we see high profile cases like suicides following chatbot conversations, the more cautious legislators will be. Public distrust of AI is growing, in large part because people feel like it’s being forced on them, and that will just harm the technology in the long run.

    I genuinely am excited still about the nuts and bolts of how all this stuff works. It’s my genuine enthusiasm that I feel situates me well to criticise the technology, because I’m coming from an earnest place of wanting to see humans make cool stuff that improves lives — that’s why I became a scientist, after all. This, however, does not feel like progress. Technology doesn’t exist in a vacuum and if we don’t reckon with the real harms and risks of a new tool, we risk shutting ourselves off to the positive outcomes too.



  • This sounds interesting. It reminds me of past workers’ movements in history, namely the Luddites and the UK miners’ strike. If you want to learn more about the Luddites and what they were asking for, the journalist Brian Merchant has a good book named “Blood in the Machine”.

    Closer to my heart and my lived experience is the miners’ strike. I wasn’t born at the time, but I grew up in what I semi-affectionately call a “post industrial shit hole”. A friend once expressed curiosity about what an alternative to shutting the mines would have been, especially in light of our increasing knowledge of needing to move away from fossil fuels. A big problem with what happened with the mines is that there were entire communities that were effectively based around the mines.

    These communities often did have other sources of industry and commerce, but with the mines gone, it fucked everything up. There weren’t enough opportunities for people afterwards, especially because miners’ skills and experience couldn’t easily translate to other skilled work. Even if a heckton of money had been provided to “re-skill” out of work miners, that wouldn’t have been enough to absorb the economic calamity caused by abruptly closing a mine, precisely because of how locally concentrated the effect would be. If done all at once, for instance, you’d find a severe shortage of teachers and trainers, who would then find themselves in a similar position of needing to either move elsewhere to find work, or train in a different field. The key was that there needed to be a transition plan that acknowledged the human and economic realities of closing the mines.

    Many argued, even at the time, that a gradual transition plan that actually cared about the communities affected would lead to much greater prosperity for all. Having grown up amongst the festering wounds of the miners’ strike, I feel this to be true. Up in the North of England, there are many who feel like they have been forgotten or discarded by the system. That causes people a lot of pain; I think it’s typical for people to want their lives to be useful in some way, but the Northern, working class manifestation of this instinct is particularly distinct.

    Linking this back to your question, I think that framing it as compensation could help, but I would expect opposition to remain as long as people don’t feel like they have ways to be useful. A surprising contingent of people who dislike social security payments that involve “getting something for nothing” are people who themselves would be beneficiaries of such payments. I link this perspective to the listlessness I describe in ex-mining communities. Whilst the vast majority of us are chronically overworked (including those who may be suffering from underemployment due to automation), most people do actually want to work. Humans are social creatures, and our capacities are incredibly versatile, so it’s only natural for us to want to labour towards some greater good. I think that any successful implementation of universal basic income would require that we speak to this desire in people, and help to build a sense that having their basic living costs accounted for is an opportunity for them to do something meaningful with their time.

    Voluntary work is the straightforward answer to this, and indeed, some of the most fulfilled people I know are those who can afford to work very little (or not at all), but are able to spend their time on things they care about. However, I see so many people not recognise what they’re doing as meaningful labour. For example, I go to a philosophy discussion group where there is one main person who liaises with the venue, collects the small fee every week (£3 per person), updates the online description for the event and keeps track of who is running each session, recruiting volunteers as needed. He doesn’t recognise the work he does as being that much work, and certainly doesn’t feel it’s enough to warrant the word “labour”. “It’s just something I do to help”; “You’re making it sound like something larger than it is — someone has to do it”. I found myself (affectionately) frustrated during this conversation because it highlights something I see everywhere: how capitalism encourages us to devalue our own labour, especially reproductive labour and other socially valuable labour. There are insufficient opportunities for meaningful contribution within the voluntary sector as it exists now, but so much of what people could and would be doing more of exists outside of that sector.

    We need a cultural shift in how we think about work. However, it’s harder to facilitate that shift in how we view labour if most people are forced to only see their labour in terms of wages and salaries. On the other hand, people are more likely to resist policies like UBI if they feel it presents a threat to their work-centred identity and their ability to conceive of their existence as valuable. It’s a tricky chicken-or-egg problem. Overall, this is why I think your framing could be useful, but is not likely to be sufficient to change people’s minds. I think that UBI or similar certainly is possible, but it’s hard to imagine it being implemented in our current context due to how radical it is. Far be it from me to shy away from radical choices, but I think that it’s necessary to think of intermediary steps towards cultivating class consciousness and allowing people to conceive of a world where their intrinsic value is decoupled from their output under capitalism. For instance, I can’t fathom how universal basic income would work in a US without universal healthcare. It boggles my mind how badly health insurance acts to reinforce coercive labour relations. The best thing we can do to improve people’s opinion of universal basic income is to improve their material conditions.

    Finally, on AI. I think my biggest disagreement with Automation Compensation as a framing device for UBI is that it inadvertently falls into the trap of “tech critihype”, which the linked author describes as “[inverting] boosters’ messages — they retain the picture of extraordinary change but focus instead on negative problems and risks”. Critihype may appear to criticise something, but actually ends up feeding the hype cycle, and in turn, is nourished by it. The problem with AI isn’t that it is going to end up replacing a significant chunk of the workforce, but rather that penny-pinching managers can be convinced that AI is (or will be) able to do that.

    I like the way that Brian Merchant describes the real problem of AI on his blog:

    "[…] the real AI jobs crisis is that the drumbeat, marketing, and pop culture of “powerful AI” encourages and permits management to replace or degrade jobs they might not otherwise have. More important than the technological change, perhaps, is the change in a social permission structure.”

    This critical approach is extra important when we consider that the jobs most heavily affected by AI are in creative fields. We’ve probably all seen memes that say “I want an AI to automate doing the dishes so that I can do art, not automate doing art so I can spend more time doing the dishes”. Universal Basic Income would be limited in alleviating social angst unless we can disrupt the pervasive devaluation of human life and effort that the AI hype machine is powering.

    Though I have ended up disagreeing with your suggestion, thanks for posing this question. It’s an interesting one to ponder, and I certainly didn’t expect to write this much when I started. I hope you find my response equally interesting.




  • For a while, I was subscribed as a patron to Elisabeth Bik’s Patreon. She’s a microbiologist turned “Science Integrity Specialist”, which means she investigates and exposes scientific fraud. Despite doing work that’s essential to science, she has struggled to get funding because there’s a weird stigma around what she does; it’s not uncommon to hear scientists speak of people like her negatively, because they perceive anti-fraud work as being harmful to public trust in science (which is obviously absurd, because auditing the integrity of research is surely necessary for building and maintaining trust in science).

    Anyway, I mention this because it’s one of the most dystopian things I’ve directly experienced in recent years. A lot of scientists and other academics I know are struggling financially, even though they’re better funded than she is, so I can imagine that it’s even worse for her. How fucked up is it for scientific researchers to have to rely on patrons like me (especially when people like me are also struggling with rising living costs)?


  • Sometimes I do get YouTube telling me that I need to disable my adblocker to access a video, so they do try to block that stuff (though given how infrequently this happens, and that some people report it while others never experience it, I suspect they’re just testing methods of detection and blocking).

    Usually when it happens, I just go into my uBlock settings and update stuff. I can’t remember that ever not working. It feels like a low-key arms race, in a cold-war kind of way.