I think the reason we can say something like an LLM doesn't suffer is that it has no reward function and no punishment function outside of training. Everything we call 'suffering' is tied to the release or non-release of reward chemicals in our brains. We feel bad to discourage us from recreating the conditions that made us feel bad, and we feel good to encourage us to recreate the conditions that made us feel good. Generally this has been advantageous to our survival (less so in the modern world, but that's another discussion).
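To make the "outside of training" point concrete, here's a minimal toy sketch (assuming PyTorch; the model, sizes, and names are purely illustrative, not any real LLM) contrasting a training step, where a loss signal actually changes the weights, with inference, where there's only a forward pass and no signal of any kind:

```python
import torch
import torch.nn as nn

# Toy stand-in for a language model; names and sizes are purely illustrative.
model = nn.Linear(16, 16)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training: a loss ("punishment") signal exists and actually changes the weights.
inputs = torch.randn(4, 16)
targets = torch.randint(0, 16, (4,))
optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)  # the only place a reward/penalty-like signal appears
loss.backward()
optimizer.step()

# Inference: a pure forward pass. No loss, no update, nothing to seek or avoid.
with torch.no_grad():
    frozen_output = model(torch.randn(1, 16))
```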
If a computer program lacks a pain mechanism it can't feel pain; all possible outcomes are equally joyous or equally painful. Machines whose networks have correction and training built in as part of regular functioning are probably something of a grey area: a sufficiently complex network like that, I think we could argue, feels suffering under some conditions.
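For what that grey area might look like, here's a hypothetical sketch (again PyTorch, with made-up names like act_and_adapt) of an "always learning" machine where the correction signal is part of normal operation rather than confined to a separate training phase:

```python
import torch
import torch.nn as nn

# Hypothetical machine whose error/correction signal is part of regular functioning.
model = nn.Linear(8, 8)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def act_and_adapt(observation, feedback):
    """Respond, then immediately adjust the weights based on the outcome."""
    prediction = model(observation)
    error = loss_fn(prediction, feedback)  # correction happens during normal operation
    optimizer.zero_grad()
    error.backward()
    optimizer.step()
    return prediction.detach()

# Every interaction carries its own correction signal, not just a training phase.
for _ in range(3):
    act_and_adapt(torch.randn(1, 8), torch.randn(1, 8))
```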
Why would you think it's easier? Pain/pleasure is a lot older in animals than language, which to me means it's probably been a lot more refined by evolution.