Ask HN: How much do we understand about how ChatGPT works?
2 by jelan | 0 comments on Hacker News.
I am overall skeptical of ML projects: I mostly find myself excited about the potential outcomes but disappointed in the execution (if there is any; much of the time it just sounds good as a marketing tactic to throw ML buzzwords into whatever you are doing). But every once in a while we get something really useful like ChatGPT, which challenges my negative assumptions about the space enough to make me want to understand more about what's happening.

I'm wondering how much the creators of ChatGPT really understand what the model is doing, and to what extent they are able to make changes to it. My naive understanding is that most of the time the model is a black box: we understand the inputs well, but are really only capable of observing what comes out of the box, rather than being able to change how the box itself works. How much programming is involved in creating something like this?

I am mainly interested in these questions after reading about the Bitter Lesson [1], which argues that injecting our human understanding of how a problem should be solved only limits how well a computer can solve it once it understands the objective. Are we getting to the point where we just accept that we don't understand how the solution works, because in cases like ChatGPT the outcome is good enough that we don't worry too much about it?

[1]: https://ift.tt/y5ISndu
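To make the black-box framing concrete, here is a toy sketch (the `model` function is a hypothetical stand-in, not anything like ChatGPT's actual internals): from the outside, the only lever you have is choosing inputs and recording outputs; the interface exposes nothing that lets you edit the mechanism itself.

```python
def model(prompt: str) -> str:
    """Hypothetical stand-in for an opaque trained model.

    The caller is assumed not to know or control what happens in here;
    they only see the mapping from prompt to response.
    """
    return prompt[::-1]  # arbitrary placeholder behavior

# Black-box access: probe with inputs, observe outputs, and try to
# characterize the behavior from those observations alone.
observations = {p: model(p) for p in ["hello", "bitter lesson"]}
print(observations)
```

Interpretability research is roughly the attempt to go beyond this interface and inspect or edit the internals directly, which is why the "we only observe the outputs" framing is a simplification rather than the whole story.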