This happened to me a lot when I tried to run big models with small context windows. It would effectively run out of memory, so each new token never actually got added to the context, and it would get stuck in an infinite loop repeating the previous token. It is also possible there was a memory issue on Google’s end.
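If you are running the model locally, here is a minimal sketch of the workaround, assuming the llama-cpp-python bindings (the model path and the numbers are placeholders): give the model a context window big enough for the prompt plus the reply, and cap generation so it cannot run past the window.

```python
# Minimal sketch using llama-cpp-python (assumed); model path and sizes are placeholders.
from llama_cpp import Llama

CTX = 4096  # total context window in tokens

llm = Llama(model_path="model.gguf", n_ctx=CTX)

prompt = "Summarize the plot of Dune in three sentences."
prompt_tokens = len(llm.tokenize(prompt.encode("utf-8")))

# Leave room for the reply so generation never runs past the window,
# which is what tends to produce the repeated-token loops described above.
budget = CTX - prompt_tokens - 8  # small safety margin

out = llm(prompt, max_tokens=budget, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```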
I tried to convince fellow Linux users to recommend just one distro. It doesn’t even have to be a good distro, just the one a newcomer is least likely to run into issues with, and the one where, if they do hit issues, solutions are easiest to find. Things like Ubuntu and Mint clearly fit the bill. They can decide later if they want to switch to something different based on what they learn from using it.
No one listened to me, because everyone wants to recommend their personal favorite distro rather than the one that would cause the user the fewest problems and be the easiest to use. A person who loves PopOS will insist the person must use PopOS. A person who loves Manjaro will insist the person must use Manjaro. Linux users like so many different distros that everyone ends up recommending something different, which just makes it confusing.
I gave up even bothering after a while. Linux will never be big on the desktop unless some corporation pushes a Linux-based desktop OS.
I have used Debian as my daily driver for at least a decade, but I still recommend Mint because it has all the good things about Debian with extras on top.
Debian developers just push out kernel updates without warning you about possible system incompatibilities. For example, if you have an Nvidia GPU you might get a notification to “update,” and a normie will likely press it only for the PC to boot to a black screen, because Debian pushed out a kernel update that breaks compatibility with the Nvidia drivers and did nothing to warn the user about it. A normie probably won’t know how to get from the black screen to a TTY and roll back the update.
I remember this happening before, and I had to go to /r/Debian and respond to all the people freaking out, explaining how to fix their systems and roll back the update.
Operating systems like Ubuntu, Mint, PopOS, etc., do more testing on their kernels before rolling them out to users. They also tend to ship more up-to-date kernels. I had Debian on everything but the gaming PC I had recently built, because Debian 12 used such an old kernel that it wouldn’t support my motherboard hardware. This was a kernel-level issue and couldn’t be fixed just by installing a new driver. Normies are not going to want to compile their own kernel for their daily driver, and neither do I, even with a lot of Linux experience.
I ended up just using Mint on that PC until Debian 13 was released, because my only options would have been to switch to the testing or unstable branch, or to compile my own kernel, neither of which I cared to do on a PC I just wanted to work so I could play Horizon or whatever.
My issue with Wayland is just that not everything supports it. I tried switching to Wayland this year and immediately ran into software that wasn’t compatible: Steam Link would not stream over Wayland, but after switching back to X11 it streamed just fine. At least in my experience, Wayland itself is not the problem; developers not supporting Wayland is the problem. The moment I run into even one program I want to use that doesn’t work with Wayland, I am going to switch back to X11 permanently, and I think most users think that way. Most don’t want to switch back and forth just to use one program; if a single program doesn’t work, they will just revert to X11 and stay there.
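For what it’s worth, when debugging this kind of thing it helps to confirm which session type a program is actually seeing. A minimal sketch in Python, relying only on the standard XDG_SESSION_TYPE, WAYLAND_DISPLAY, and DISPLAY environment variables (nothing here is specific to Steam Link):

```python
# Minimal sketch: report whether the current desktop session looks like Wayland or X11.
# Uses only standard environment variables; it does not detect per-app XWayland fallback.
import os

session = os.environ.get("XDG_SESSION_TYPE", "")      # usually "wayland" or "x11"
wayland_display = os.environ.get("WAYLAND_DISPLAY")   # set when a Wayland compositor is running
x_display = os.environ.get("DISPLAY")                 # set when an X server (or XWayland) is available

if session == "wayland" or wayland_display:
    print(f"Wayland session (WAYLAND_DISPLAY={wayland_display}, DISPLAY={x_display})")
elif session == "x11" or x_display:
    print(f"X11 session (DISPLAY={x_display})")
else:
    print("Could not determine session type from the environment.")
```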
It is the academic consensus even among western scholars that the Ukrainian famine was indeed a famine, not an intentional genocide. This is not my opinion, but, again, the overwhelming consensus even among the most anti-communist historians like Robert Conquest who described himself as a “cold warrior.” The leading western scholar on this issue, Stephen Wheatcroft, discussed the history of this in western academia in a paper I will link below.
He discusses how there was strong debate in western academia over whether it was a genocide, up until the Soviet Union collapsed and the Soviet archives were opened. When the archives were opened, many historians expected to find a “smoking gun” showing that the Soviets deliberately had a policy of starving the Ukrainians, but no such thing was ever found, and so even the most hardened anti-communist historians were forced to change their tune (and indeed you can find many documents showing the Soviets ordering food to Ukraine, such as this one and this one).
Wheatcroft considers Conquest’s change of opinion to mark the end of that “era” in academia, but he also mentions that very recently there has been a revival of the claims of “genocide.” These, however, are clearly motivated and pushed by the Ukrainian state for political rather than academic reasons. It is literally a propaganda move. There are hostilities between the current Ukrainian state and the current Russian state, so the Ukrainian state has a vested interest in painting the Russian state poorly, and reviving this old myth is good for its propaganda. But it is just that: state propaganda.
> Discussions in the popular narrative of famine have changed over the years. During Soviet times there was a contrast between ‘man-made’ famine and ‘denial of famine’. ‘Man-made’ at this time largely meant as a result of policy. Then there was a contrast between ‘man-made on purpose’, and ‘man-made by accident’ with charges of criminal neglect and cover up. This stage seemed to have ended in 2004 when Robert Conquest agreed that the famine was not man-made on purpose. But in the following ten years there has been a revival of the ‘man-made on purpose’ side. This reflects both a reduced interest in understanding the economic history, and increased attempts by the Ukrainian government to classify the ‘famine as a genocide’. It is time to return to paying more attention to economic explanations.
That’s the thing, though. Einstein’s interpretation did not require a “miracle,” because it was merely the position that quantum mechanics is incomplete, since we don’t currently fully understand “what happens down there.” It was more a statement of “I don’t know” and “we don’t have the full picture” than an attempt to put forward a full picture. Most people agree that GR is merely an approximation of a more fundamental theory, and there is a lot of work on speculative models that might one day replace it, like String Theory or Loop Quantum Gravity. But it has become rather taboo to suggest that maybe quantum mechanics is not the most fundamental “final” theory either, and that potential speculative replacements for it should be studied as well.
Those were the kinds of things that interested Einstein in his later years. He published a paper, “Does Schrödinger’s Wave Mechanics Completely Determine the Motion of a System, or Only Statistically?”, in which he proposed an underlying model similar to pilot wave theory, although he later withdrew it when it was shown to him to be nonlocal, since he had hoped to get rid of the nonlocal aspect. He had published an earlier paper, “Does Field Theory Offer Possibilities for the Solution of the Quantum Problem?”, in which he hoped to figure out whether an overdetermined system of differential equations could restrict the possible initial configurations of a system so that it would not be physically possible for the experimenter to choose the initial conditions of the experiment freely. If he were still alive today, he would probably take an interest in the work of people like Gerard 't Hooft.
Most interpretations say “we know what happens down there,” while Einstein’s interpretation was a more humble one, saying we do not know yet.
My issue with the orthodox interpretations is not that they are random but that they contain miracles. This was John Bell’s original criticism, which people seem to have forgotten. The Copenhagen interpretation says that there is a quantum world until you measure it, then a miracle happens and you get a classical result, but it does not tell you how this process actually works. The Many Worlds Interpretation, which is the second most popular, just denies that the classical world of observable particles in 3D space, where experiments actually have outcomes, even exists, and posits that it is a grand illusion created by the conscious mind; but it also cannot explain how this illusion could possibly come about and just vaguely gestures at it having something to do with consciousness. It punts the miracle over to neuroscience and ultimately does not answer anything either. A lot of people think Einstein wasn’t the biggest fan of quantum mechanics because of the randomness, but if you actually read his works, he was clear that the issue was that it does not give you a coherent, complete picture of reality, so he just thought it was incomplete, an approximation of a more fundamental theory that we have yet to discover.

At least llama.cpp doesn’t seem to do that by default. If it overruns the context window it just blorps.