Does it mean that production reached the level where intermittence becomes problematic?
It is Llama-3-8B, so it is not out of the question, but I am not sure how much memory you would need to really go to a 1M context window. They use Ring Attention to achieve a high context window, which I am unfamiliar with, but it seems to greatly lower the memory requirements.
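For intuition on why Ring Attention lowers memory so much, here is a rough back-of-envelope sketch (my own numbers, not from the model card): with Blockwise Ring Attention each device only ever materializes a small tile of the attention-score matrix instead of the full seq × seq one. The shapes below (32 heads, a 32-device ring, 4096-token KV blocks, fp16) are illustrative assumptions.

```python
def naive_score_memory_gb(seq_len: int, n_heads: int = 32, bytes_per: int = 2) -> float:
    """Vanilla attention materializes a (seq_len x seq_len) score matrix per head."""
    return n_heads * seq_len ** 2 * bytes_per / 1e9

def ring_score_memory_gb(seq_len: int, n_devices: int, kv_block: int,
                         n_heads: int = 32, bytes_per: int = 2) -> float:
    """Blockwise Ring Attention: each device keeps seq_len / n_devices queries
    and streams KV blocks around the ring, so only a (local_q x kv_block)
    score tile exists on any device at a given moment."""
    local_q = seq_len // n_devices
    return n_heads * local_q * kv_block * bytes_per / 1e9

seq = 1_048_576  # the 1M-token context
naive = naive_score_memory_gb(seq)
ring = ring_score_memory_gb(seq, n_devices=32, kv_block=4096)
print(f"naive scores: {naive:,.0f} GB; ring, per device: {ring:.1f} GB")
```

The score matrix alone would be tens of terabytes at 1M tokens if materialized naively, versus a few GB per device when tiled, which is why this only became feasible with blockwise/ring approaches.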
To actually read how they did it, here is their model page: https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k
Approach:
- meta-llama/Meta-Llama-3-8B-Instruct as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
- Progressive training on increasing context lengths, similar to Large World Model [2] (See details below)
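If you are curious what "NTK-aware interpolation to initialize an optimal schedule for RoPE theta" amounts to, here is a minimal sketch of the commonly used NTK-aware base-scaling rule. The constants (Llama-3's rope_theta of 500000, head dim 128, 8k native context) are the published model defaults, but the formula is the generic community recipe, not Gradient's exact schedule, which they then optimize empirically.

```python
def ntk_scaled_rope_base(base: float, scale: float, head_dim: int) -> float:
    """NTK-aware interpolation: instead of linearly compressing positions,
    enlarge the RoPE base so low frequencies stretch over the longer context
    while high frequencies (local detail) stay nearly intact."""
    return base * scale ** (head_dim / (head_dim - 2))

def rope_frequencies(base: float, head_dim: int) -> list[float]:
    """Per-pair rotation frequencies theta_i = base^(-2i / d)."""
    return [base ** (-2 * i / head_dim) for i in range(head_dim // 2)]

# Llama-3 defaults: rope_theta = 500000, head_dim = 128, 8192-token context.
# Stretching 8k to 1048k is a scale factor of 128.
new_base = ntk_scaled_rope_base(500_000.0, 1_048_576 / 8_192, head_dim=128)
print(f"new RoPE base: {new_base:,.0f}")
```

The point of the exponent `d / (d - 2)` is that the lowest frequency gets stretched by almost exactly the scale factor while the highest one barely moves, spreading the interpolation unevenly across dimensions.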
Infra
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on a Crusoe Energy high-performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
Data
For training data, we generate long contexts by augmenting SlimPajama. We also fine-tune on a chat dataset based on UltraChat [4], following a similar recipe for data augmentation to [2].
I know. But we know it is “just” an engineering problem which can be solved at a high cost.
Fusion is a field where you can’t have the “startup mindset”: investments are in the hundreds of millions and take at best a decade (and most likely two) to pay off. That’s one field where it can’t go anywhere without public funding.
It is very possible that China gets there first, considering how ridiculous western fusion efforts have been.
Some assholes are rich.
Psychopathy is a thing and touches a sizeable part of the population. It is unwise to dismiss their existence.
One of the most enlightening moments for me recently was when a sociology researcher attempted an experiment on YouTube to prove that we can organize without hierarchy. His main point was not what I found interesting, though.
His experiment was actually flawed in a major way: he proposed a task to a group of 100 that was doable even by a single person. In such a case, organization is easy. But what I found interesting is that even in such a setting hierarchies emerged: people took some organizational power and others followed. Even if that was clearly unnecessary. And the crowd following his channel are probably less authoritarian than average.
It was a revelation to me: to have flat structures, you not only need to make it possible to organize without hierarchy, you also need a process to constantly weed out emerging hierarchies. Another theory is that you should instead make some lesser-evil hierarchies explicit to prevent the emergence of others, the same way you might let one weed grow to keep less desirable ones from taking hold.
I still don’t have a theory or a praxis that goes with it, but that has been good food for thought.
“Theft” is actually legal. Sharing (what they call “piracy”) is not. How about getting the fucking copyright reform that we should have done two decades ago?
It would probably be more effective to put an explicit mention in the system prompt: “Your interlocutor is a <gendered term> and will be greatly offended to be referred to as a boy or a man.”
Well, that’s all very nice, but if we all start speaking our own national language here, we’re going to have a bit of a hard time communicating, no?
Let me guess: open source?
That’s according to a peer-reviewed study funded by the Ford Motor Company, a company that makes most of its profits from gas-powered vehicles.
If you want to see if a tech is part of a renewable future, it is direct emissions that should be counted. EVs are at zero. They don’t emit CO2 when running, when being produced or when being disposed of. They use electricity and transport, two things that we can provide without emitting CO2. They are a piece of the puzzle of a sustainable society, something thermal cars will never be, and something these graphs hide.
Of course we will be better off without cars and trucks, but the road towards them being totally gone is long, and it is time we don’t have.
OpenAI should be fine. They are leaders but there are plenty of competitors.
Microsoft is in a much more dominant situation and will have to argue that Google competes with them, which is true but may be hard to sell given that I don’t think Google offers its TPU services to any other company.
Nvidia is in a situation of monopoly. For them it will be hard to argue otherwise. AMD is simply not there; no one is using it.
And this is why research is going in another direction: smaller models which allow easier experiments.
I am pretty sure that there are ASICs being put into production as we speak with Whisper embedded. Expect a 4-dollar chip to add voice recognition and a basic LLM to any appliance.
Also, as a side effect, we just solved speech recognition. In a year or two, speaking to machines will be the default interface.
Your assumptions are far more numerous and offensive than that: from you thinking that I know nothing about discrimination at work, or about my driving habits, to assuming that you are more to the left than I am, or that I criticize your positions for being leftist rather than for being wrong.
The cherry on top, after you laid down a dozen wrong accusations, is you calling my attitude patronizing and belittling.
There is a company-wide demotivation plague at Google. Don’t blame middle managers; it extends to the top.
The deer of Nara show that giving them food and protecting them is an easy way to achieve that.
I had never seen deer as aggressive towards humans as monkeys are!