Based on the success of NASA's FedEx-to-the-Moon project yesterday, I'm proposing data centers on the Moon. Why put all that heat in our atmosphere? So I've got compute solved; I'm just waiting on energy. Looking at you, IP and TT.
Dissipating heat in a vacuum is rough, but with ground contact it should be very doable. You'd need to take a bunch of water with you to run into the ground for cooling. The 2-3 second round-trip latency of being on the Moon will also be a pain, but only for real-time applications. Research and other such workloads don't rely on low latency.
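For what it's worth, that 2-3 second figure falls straight out of the speed of light. A quick back-of-the-envelope check, assuming the average Earth-Moon distance:

```python
# Back-of-the-envelope light-delay calculation for an Earth-Moon link.
# Uses the average Earth-Moon distance; the real distance varies
# (~356,500 km at perigee to ~406,700 km at apogee), so the delay
# wobbles around these numbers.
AVG_DISTANCE_KM = 384_400        # average Earth-Moon distance
C_KM_PER_S = 299_792.458         # speed of light in vacuum

one_way_s = AVG_DISTANCE_KM / C_KM_PER_S
round_trip_s = 2 * one_way_s

print(f"one-way:    {one_way_s:.2f} s")    # ~1.28 s
print(f"round trip: {round_trip_s:.2f} s") # ~2.56 s
```

So the 2-3 second number holds for a round trip, before any processing or queuing on either end.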
Water is already there: https://science.nasa.gov/moon/moon-...
A current area of research is more local inference. Instead of large language models that are broadly (but not deeply) knowledgeable, use medium and small language models to do tasks, with larger models acting as controllers (or managers...). Real-time applications will still be fine, because the inference can run on smaller low-voltage devices. Training and creation (images, code, music, videos...) are the big power draws, and they don't need to be fast.
Right. The point is that real-time applications will use local (on-Earth), low-power (low-heat) means to infer. The goal is that the task is inferred at the point of the task, rather than going back to a data center. So things like computer vision run on the camera, rather than sending the image to a data center, running inference there, and sending the result back. (Go look at the Jetson series of devices.) The things that require data centers (high power, high heat) can tolerate latency.
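The split described above can be sketched as a simple router: real-time tasks stay on the edge device, everything else gets shipped off to the (possibly very far away) data center. A minimal sketch; all the names and the task list here are hypothetical stand-ins, not any real API:

```python
# Hypothetical edge-vs-data-center routing: lightweight, latency-sensitive
# tasks run on-device; heavy, latency-tolerant tasks go remote.

# Tasks a small on-device model can handle (hypothetical examples).
EDGE_TASKS = {"object_detection", "wake_word", "navigation"}

def run_on_edge(task: str) -> str:
    # Stand-in for a small model on a low-power device
    # (something Jetson-class).
    return f"{task}: handled locally, ~milliseconds"

def run_in_data_center(task: str, round_trip_s: float = 2.6) -> str:
    # Stand-in for shipping the task to a remote (even lunar) data
    # center; a real call would block for roughly round_trip_s plus
    # processing time.
    return f"{task}: handled remotely, ~{round_trip_s:.1f} s round trip"

def route(task: str) -> str:
    # Real-time work stays local; batch/creative work tolerates latency.
    if task in EDGE_TASKS:
        return run_on_edge(task)
    return run_in_data_center(task)

print(route("object_detection"))
print(route("generate_custom_script"))
```

The design point is just that the routing decision is made at the point of the task, so only latency-tolerant work ever pays the round-trip cost.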
All I know is: from moon landing -> slide rule -> calculator -> all the other cool stuff in my lifetime, I am optimistic y'all will figure it out. I'm excite.
This sounds suspiciously like the model version of outsourcing. Big models pushing off tasks to smaller "narrowly skilled" models for effort savings.
I have thoughts on this, but I'm told they waver between religion on the high side and cultish on the low (which I find is just a difference of scale). So I'll say, for me, you'll be fine. I have faith.
Yes. Your floor vacuum needs to understand that you want it to do a better job in the corners. But it doesn't need to know how to write a Python script to interface with some random API. It can pass that request along to the moon and the stars, metaphorically speaking.
A second up and a second down, sandwiched around the processing time, is not really an impediment. Wow, imagine being 5 or 10 seconds away from any sort of custom script. That's wild.
Elon Musk sues OpenAI accusing it of putting profit before humanity https://www.theguardian.com/technol...ues-open-ai-profit-power-microsoft-sam-altman
Reminds me of last year when he wanted everyone to pause AI development for 6 months to give his start-up time to catch up.
No, this is different. OpenAI was set up as a nonprofit, open-source enterprise; it is now closed source and for-profit.
I want to say the open source side still exists, but now a lot of (most) effort goes into developing products as well. Whether that's enough to withstand scrutiny, I don't know. But Musk is only interested in it from X.ai's perspective, which is about taking competitors off the street.
I agree that this is likely his true motivation, but it is also true that their original mission did not include making closed products for private companies.