Cake day: June 8th, 2024



  • I’m not actually asking for good faith answers to these questions. Asking seems the best way to illustrate the concept.

    Does the programmer fully control the extents of human meaning as the computation progresses, or is the value in leveraging ignorance of what the software will choose?

    Shall we replace our judges with an AI?

    Does the software understand the human meaning in what it does?

    The problem with the majority of the AI projects I’ve seen (in rejecting many offers) is that the stakeholders believe they have far more influence over the human meaning of the results than the quality and nature of the data they have access to can support. The scope of the data limits the scope of the resulting information, which in turn limits the scope of meaning. Stakeholders want to break those rules with “AI voodoo”. Then, someone comes along and sells the suckers their snake oil.







  • Assuming you’re coming from a linear programming and OOP background, then data work (incl. SQL) kinda sucks because it’s not always clear how to apply existing concepts. But doing so is absolutely critical to success, perhaps more so than in most OOP environments. Your post isn’t funny to me, because I’d be laughing at you, not with you.

    If a variable is fucked, the first questions you should answer are “Where’d it come from?” and “What’s its value along the way?”. That looks a lot different in Python than in SQL, but the troubleshooting concept is the same.
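    A minimal sketch of that same troubleshooting move on the SQL side, using Python’s built-in `sqlite3`. The `orders` table, the column names, and the “wrong total” scenario are all invented for illustration — the point is only that “where’d it come from?” means inspecting the raw rows, and “what’s its value along the way?” means running each intermediate query in isolation:

    ```python
    # Hypothetical scenario: a report shows a suspicious total of 0 for
    # customer 'a', and we trace the value back through the query.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
        INSERT INTO orders VALUES
            (1, 'a', 10.0),
            (2, 'a', -10.0),   -- a refund? bad data? this is the culprit
            (3, 'b', 5.0);
    """)

    # "Where'd it come from?" -- in Python you'd inspect the variable's
    # sources; in SQL you look at the raw rows feeding the aggregate.
    raw = conn.execute(
        "SELECT * FROM orders WHERE customer = 'a'"
    ).fetchall()
    print(raw)  # the negative amount surfaces here

    # "What's its value along the way?" -- run the aggregation step by
    # itself instead of buried inside a larger query or report.
    total = conn.execute(
        "SELECT SUM(amount) FROM orders WHERE customer = 'a'"
    ).fetchone()[0]
    print(total)  # 0.0 -- the 'fucked variable', now explained by the raw rows
    ```

    Same concept as stepping through a Python function with prints or a debugger; only the mechanics differ.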

    If your planning had used table/query definitions where an OOP design would use object definitions, you’d probably not have made the initial error. Again, it looks different, but the concept is the same.
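    To make the parallel concrete, here is a hedged sketch of the same planning artifact expressed both ways. The `User` class, the `users` table, and the failing insert are all hypothetical; the idea is that invariants you’d hang on an object definition (required fields, uniqueness) live in the table definition instead, and the database enforces them at write time:

    ```python
    # Hypothetical example: one "definition" in each world.
    import sqlite3
    from dataclasses import dataclass

    @dataclass
    class User:          # OOP planning artifact
        id: int
        email: str       # "must be present" is only a convention here

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE users (            -- SQL planning artifact: same shape,
            id    INTEGER PRIMARY KEY,  -- but the constraints are enforced
            email TEXT NOT NULL UNIQUE
        )
    """)

    conn.execute("INSERT INTO users VALUES (1, 'a@example.com')")

    # The kind of "initial error" a table definition catches for you:
    err = None
    try:
        conn.execute("INSERT INTO users VALUES (2, NULL)")
    except sqlite3.IntegrityError as e:
        err = e
    print(err)  # e.g. "NOT NULL constraint failed: users.email"
    ```

    Planning with the table definition up front means the constraint exists before the first bad row does, rather than being discovered while debugging.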