What the application can and cannot do is off-topic here.
Here is the genuineness verification design. As the application designer, I design it to be difficult to spoof. To do so, I add attributes to files that surely exist in the system; the attribute's data is a (big) numeric value calculated from the app's version, the buyer's ID, chosen system properties, etc., a number unique enough to identify the customer. The attribute's name is obscured to make it harder to find which attribute carries this data; in addition, the application adds random numbers in attributes attached to other files, so the hacker won't know which attribute is the real check. (Naturally, there may be quite a few "real" numbers, each calculated in a different way, as well as quite a few "barren", decoy ones; it's not a good idea to put all eggs in one basket.)

Upon startup of the application, a "real" attribute value is read (chosen at random from the pool of "real" attributes), the corresponding value is recalculated from the available system data, and the two are compared. Simultaneously, the value read from the attribute is sent over the network to a central server, which holds all the "real" numbers received from the client upon installation (no user data, of course; the data that composes the numbers is not reversely computable). Any of these checks may be repeated as needed. After these comparisons, the program decides whether it is a genuine copy or a pirated one, and acts accordingly. E.g., it triggers a full HDD erase :-P. Just kidding.

Honestly, I see only one way to break this system: edit the application's binary and replace the "jump_if_zero" instruction representing the main "genuine / hacked" decision with a "no_op" (or an unconditional jump). Of course, this too can be checked, e.g. by hashing the contents of the application's file, but that is well beyond the scope of this discussion.
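To make the scheme above concrete, here is a minimal sketch of the fingerprint idea in Python. Everything here is hypothetical illustration (the real application would feed in actual BeOS/Haiku system properties, and would store these values in file attributes rather than return them):

```python
import hashlib
import secrets

def fingerprint(app_version: str, buyer_id: str, system_props: dict) -> int:
    """Derive a large, hard-to-reverse number from installation data."""
    material = "|".join(
        [app_version, buyer_id]
        + [f"{k}={v}" for k, v in sorted(system_props.items())]
    )
    # A cryptographic hash is one-way: the server can store this number
    # without being able to recover the buyer ID or system properties.
    return int.from_bytes(hashlib.sha256(material.encode()).digest(), "big")

def decoy() -> int:
    """A 'barren' value: random noise stored under some other attribute name."""
    return secrets.randbits(256)

def startup_check(stored: int, app_version: str, buyer_id: str,
                  system_props: dict) -> bool:
    """Recompute the fingerprint from live system data and compare."""
    return stored == fingerprint(app_version, buyer_id, system_props)
```

At install time the application would compute `fingerprint(...)`, write it under an obscured attribute name, and report it to the central server; at startup it would pick one stored value at random and run `startup_check` against freshly gathered system data.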
To make sure a potential hacker does not find these attributes, they are attached to files that surely exist in the system: the application's own files (duh), libbe.so, the kernel file, the Tracker file, the Deskbar file, B_USER_SETTINGS_DIRECTORY/tracker/TrackerSettings, etc. (I won't reveal all of them here, but you get the point.) This way I ensure the application can't be portable (given the application's nature, there is no point in it being portable) and can't be distributed freely.
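For illustration, attaching such a value to an existing file might look like this. On Haiku the native calls would be fs_write_attr()/fs_read_attr(); the sketch below uses the closest Linux analogue, user-namespace extended attributes, and writes to a throwaway file it creates itself (a real installer would target files already present on every system, as listed above). The attribute name is a made-up, deliberately meaningless string:

```python
import os
import tempfile

def attach_value(path: str, attr_name: str, value: int) -> None:
    # On Linux, extended attribute names must live in a namespace
    # such as "user."; the name itself is obscured on purpose.
    os.setxattr(path, f"user.{attr_name}", value.to_bytes(32, "big"))

def read_value(path: str, attr_name: str) -> int:
    return int.from_bytes(os.getxattr(path, f"user.{attr_name}"), "big")

# Demo on a temporary file we own (hypothetical attribute name).
with tempfile.NamedTemporaryFile(dir=".", delete=False) as f:
    path = f.name
attach_value(path, "cache_idx_7f3a", 123456789)
print(read_value(path, "cache_idx_7f3a"))
os.remove(path)
```

Note that, just like BFS attributes, these values live outside the file's data stream, so the file's contents and size are untouched.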
Is this way of action acceptable? Surely it is. I don't compromise the system in any way, I don't acquire or store the user's data, I don't open any security holes for an attacker to come through, and I don't even modify the system's binaries. Heck, I don't touch the contents of any file; I modify only the attributes, which are not part of the file data.
The difference here is between the "user is god" and the "user is slave" system architectures. The first is the Linux way, where any process running with root privileges can do anything, including modifying system files. The second is the Windows way, where even the Administrator can't touch some places in the system. Yet, surprisingly, Linux is much more secure than Windows. Doesn't that mean the Linux way is better?
I agree that the user may be limited (though I have strong objections to that), but not the application. If you perform any checks, base them on the user's account properties, not on a per-application basis. And, of course, allow the user to run applications under another user's credentials.