Artificial General Intelligence and the Future

Discussion in 'The Thunderdome' started by NorrisAlan, Jun 5, 2023.

  1. fl0at_

    fl0at_ Humorless, asinine, joyless pr*ck

    Based on the success of NASA's "FedEx to the moon" project yesterday, I'm proposing data centers on the moon.

    Why put all that heat in our atmosphere?

    So I've got compute solved, I'm just waiting on energy. Looking at you IP and TT.
     
  2. IP

    IP Super Moderator

    The moon is a great place for solar.
     
    NorrisAlan likes this.
  3. NorrisAlan

    NorrisAlan Founder of the Mike Honcho Fan Club

    Dissipating heat in a vacuum is rough, but with ground contact, it should be very doable. Will need to take a bunch of water with you to run into the ground for cooling.

    Also, the 2-3 second round-trip latency of being on the Moon will be a pain, but only for real-time applications. Research and other such workloads don't depend on low latency.
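    A quick back-of-the-envelope check on that 2-3 second figure, using the average Earth-Moon distance (the actual distance varies over the orbit, so the delay shifts a bit either way):

    ```python
    # Rough Earth-Moon signal delay at the speed of light.
    C_KM_S = 299_792.458        # speed of light in vacuum, km/s
    MOON_DISTANCE_KM = 384_400  # average Earth-Moon distance, km

    one_way = MOON_DISTANCE_KM / C_KM_S
    round_trip = 2 * one_way

    print(f"one way: {one_way:.2f} s, round trip: {round_trip:.2f} s")
    # one way ≈ 1.28 s, round trip ≈ 2.56 s
    ```

    So a bare request/response to a lunar data center eats about 2.6 seconds before any processing even starts, which is consistent with the 2-3 second range above.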
     
  4. IP

    IP Super Moderator

    Water is already there: https://science.nasa.gov/moon/moon-...
     
  5. Poppa T

    Poppa T Vol Geezer

    Y'all just figure this shit out before I die. It will be awesome.
     
  6. lumberjack4

    lumberjack4 Chieftain

    Quantum Physics. Just stick an Ansible on the moon. Latency solved.
     
  7. fl0at_

    fl0at_ Humorless, asinine, joyless pr*ck

    A current area of research is more local inference. Instead of large language models that are broadly (but not deeply) knowledgeable, use medium and small language models to do tasks, and larger models as controllers (or managers...).

    Real-time applications will still be fine, because the inference can run on small, low-power devices.

    Training, and generating content (images, code, music, videos...), are the real power draws, and they don't need to be fast.
     
  8. NorrisAlan

    NorrisAlan Founder of the Mike Honcho Fan Club

    I don't think smaller models will cut down on the moon being over a light second away from earth.
     
  9. fl0at_

    fl0at_ Humorless, asinine, joyless pr*ck

    Right. The point is that real time applications will use local (on Earth) low power (low heat) means to infer.

    The goal being that the task is inferred at the point of the task, rather than going back to a data center. So things like computer vision run on the camera, rather than taking the image to a data center and running inference there, and then sending the result back. (Go look at the Jetson series of devices).

    The things that require data centers (high power, high heat) can tolerate latency.
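    A minimal sketch of that split, with entirely hypothetical names — "local" stands in for a small model on a Jetson-class device, "remote" for a big model behind high latency (a data center, or the moon):

    ```python
    # Route latency-sensitive tasks to an on-device model; defer everything
    # else to a remote, high-latency, high-power endpoint. All task names
    # and the two-tier split are illustrative assumptions, not a real API.

    LOCAL_TASKS = {"detect_obstacle", "read_gesture"}  # must be answered in milliseconds

    def route(task: str) -> str:
        """Return where a task's inference should run."""
        if task in LOCAL_TASKS:
            return "local"   # small model, low power, no round trip
        return "remote"      # large model; tolerates multi-second latency

    print(route("detect_obstacle"))   # local
    print(route("write_api_script"))  # remote
    ```

    The camera example above is the `"local"` branch: the image never leaves the device, so the round trip disappears entirely for the tasks that can't afford it.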
     
    NorrisAlan likes this.
  10. Poppa T

    Poppa T Vol Geezer

    All I know is, from moon landing -> slide rule -> calculator -> all the other cool stuff in my lifetime, I am optimistic y'all will figure it out.

    I'm excite.
     
  11. IP

    IP Super Moderator

    This sounds suspiciously like the model version of outsourcing. Big models pushing off tasks to smaller "narrowly skilled" models for effort savings.
     
  12. fl0at_

    fl0at_ Humorless, asinine, joyless pr*ck

    I have thoughts on this, but I'm told they waver between religion on the high side and cultish on the low (which I find is just a difference of scale).

    So I'll say, for me, you'll be fine. I have faith.
     
    Poppa T likes this.
  13. fl0at_

    fl0at_ Humorless, asinine, joyless pr*ck

    Yes. Your floor vacuum needs to understand that you want it to do a better job in the corners. But it doesn't need to know how to write a python script to interface with some random API. It can pass that request along to the moon and the stars, metaphorically speaking.
     
  14. IP

    IP Super Moderator

    A second up and a second down wrapped around the processing time is not really an impediment. Wow, imagine being 5 or 10 seconds away from any sort of custom script. That's wild.
     
  15. VolDad

    VolDad Super Moderator

  16. NorrisAlan

    NorrisAlan Founder of the Mike Honcho Fan Club

    justingroves and zehr27 like this.
  17. lumberjack4

    lumberjack4 Chieftain

    gcbvol and VolDad like this.
  18. IP

    IP Super Moderator

    No, this is different. OpenAI was set up as a nonprofit, open-source enterprise. It was supposed to be open source and not for profit; it is now closed source and for-profit.
     
  19. lumberjack4

    lumberjack4 Chieftain

    I want to say the open-source side still exists, but now most of the effort goes into developing products as well. Is that enough to withstand scrutiny? I don't know. But Musk is only interested in it from X.ai's perspective, which is taking competitors off the street.
     
  20. IP

    IP Super Moderator

    I agree that this is likely his true motivation, but it is also true that their original mission did not include making closed products for private companies.
     