Thank you, amigo.
In case you don't know it - I value your opinion highly. Over the years I've learnt from many of your educational posts in this forum, many of which I actually apply in practical situations.
People come and go, and I'm happy that 10 years later you're still here in this forum, sharing enlightenment with both the experienced and the inexperienced.
You also need to understand the nuances of the tar version and the options being used, both to create and to unpack the packages. If you've ever wondered why Slackware uses an ancient version of tar (1.13), it's precisely because it did/does something which newer versions of tar did not: it does not overwrite a link-to-a-directory with a real directory when unpacking an archive. Someone did, at tar-1.27, add an option to tar which restores that old tar-1.13 behaviour. The 'admin' should be able to use links to mount-points, for instance, at his will, without the installation of a package un-doing that.
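I believe the tar-1.27 option being referred to is --keep-directory-symlink (my assumption based on when it appeared; check your tar's NEWS file to be sure). A minimal sketch of the behaviour difference, assuming GNU tar >= 1.27:

[code]
# Set up a symlink-to-directory, the way an admin might link to a mount-point:
mkdir -p /tmp/demo/mnt/big-disk
ln -s mnt/big-disk /tmp/demo/opt

# Suppose pkg.tar contains a real directory ./opt with files in it.
# Default behaviour in newer GNU tar: the 'opt' symlink is replaced
# by a real directory, un-doing the admin's setup:
tar -xf pkg.tar -C /tmp/demo

# With the option added around tar-1.27, the symlink is kept and
# extraction follows it - the old tar-1.13 behaviour:
tar --keep-directory-symlink -xf pkg.tar -C /tmp/demo
[/code]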
Indeed, package management is a complex thing. It's easy on the surface but the devil is always in the details. I know of the issue you mention above - we were bitten more than once when a package overwrote a symlink with a directory, back in the days when we still used our home-brew petget-compatible "fatdog-package-manager"; but I wasn't aware that the older tar preserves the symlink (or that this is the reason why PatV sticks to the old tar).
Also, installpkg unpacks the archive first, directly into place, before even running doinst.sh. I once saw a user who complained that upgrading a package should not overwrite any old files...?? No clue there...
Indeed. Anything else will be slow. Actually, even the original installpkg is slow because it decompresses the package multiple times. This is fine if we're talking about a 500 KB package, but when deploying a 100 MB wine package, for example, it is slow. I've modified installpkg so it only decompresses once and keeps the result cached until installation is done.
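Conceptually something like this - decompress to a temporary file once and feed every later step from that. A rough sketch only, not the actual patch; the paths and package name are made up:

[code]
PKG=/path/to/wine-4.0-x86_64-1.txz
CACHE=$(mktemp /tmp/pkg-cache.XXXXXX)

# Decompress exactly once:
xz -dc "$PKG" > "$CACHE"

# Every subsequent pass reads the plain tar - no re-decompression:
tar -tf "$CACHE" > /tmp/pkg.list   # build the file list for the package database
tar -xf "$CACHE" -C /              # extract into place

rm -f "$CACHE"
[/code]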
My replacement packaging system (tpkg/tpm) also allows for pre-install, pre-uninstall and post-uninstall scripts. I experimented with first unpacking packages in a discrete location before moving them into the right place. But, just as with the slack pkgtools doing an install/remove/re-install sequence when upgrading, it becomes a long, slow process. The same scrutiny can be had by long-listing the tar archive before unpacking - and even unpacking just the install scripts, so that any pre-installation stuff can be done first.
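For a Slackware-style package that's just standard tar usage; the install/doinst.sh path is the Slackware convention, and the package name is an example:

[code]
# Long-list the archive: confirms the package is well-formed and
# shows exactly what will land where, before anything is touched.
tar -tzvf mypkg.tgz

# Unpack only the install script so it can be reviewed, or any
# pre-installation steps run, without touching the system:
mkdir -p /tmp/inspect
tar -xzf mypkg.tgz -C /tmp/inspect install/doinst.sh
less /tmp/inspect/install/doinst.sh
[/code]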
Interesting. When we moved from Fatdog 600 to 700 over 3 years ago, I looked for a new package manager that met the following criteria:
a) has separate CLI and GUI tools;
b) has the ability to pull from a remote repo;
c) repo maintenance is easy;
d) in the worst case, when the tools are not available, you can unpack the package manually.
I couldn't find anything other than pkgtools.
DEB is nice but complex. Same with RPM.
Repo maintenance for these two isn't straightforward either.
paco (now renamed to porg) is nice but it doesn't support remote repos.
pkgtools doesn't have remote capability, but fortunately slapt-get handles that.
pkgtools also doesn't have a GUI, but gslapt fixes that.
I don't remember whether tpm/tpkg already existed then. But I do remember that I was considering srcpkg as the foundation for the Fatdog build system. It didn't allow me to build packages inside a chroot, though (that is, build using libraries in a chroot instead of the host libraries); so in the end I wrote our own. I'm glad I did, because I learnt a lot of things along the way.
Listing the archive before installing is a good idea anyway - it provides confirmation that the package is well-formed and complete. But the really critical point of installing, and especially upgrading, any critical binaries is the window when the links get destroyed by installpkg and then re-created by doinst.sh.
Noted.
tpkg/tpm now use links *in the archive*, as the newer tar does the proper thing with them. PatV's decision to use doinst.sh scripts was owing in part to the old tar's faulty behaviour when overwriting existing links.
pkgtools supports both links inside the package and links created via doinst.sh. In all my packages, I always have links inside the tarball and not in doinst.sh. I actually don't see the benefit of doing it the other way; perhaps it's only useful when we update stuff like glibc or gcc libraries? The kind of stuff that pulls the rug from under your feet?
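For reference, the link block that makepkg generates in doinst.sh looks roughly like this (from memory; the library name is made up) - and the gap between the rm and the ln is exactly the danger window mentioned above:

[code]
# Typical makepkg-generated lines in doinst.sh:
( cd usr/lib ; rm -rf libfoo.so )
( cd usr/lib ; ln -sf libfoo.so.1 libfoo.so )
# Between the rm and the ln, anything that needs libfoo.so has no
# link to find - which is why this matters most for things like glibc.
[/code]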
Most packaging systems frown on having package installation run scripts because of security issues. Instead they use 'triggers', which cause the package installer to carry out the needed tasks - but only the tasks that it knows how to do; no arbitrary commands possible. This is also the way the android app-installation process works. Each app contains its own installer binary and installation script - but the script language is like the triggers system because it can only do a limited set of things.
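dpkg's triggers file is a concrete example of the declarative style: the package states what it is interested in, and the package manager - not the package - decides what happens in response. The path here is just an illustration:

[code]
# DEBIAN/triggers: purely declarative - no commands, no shell here.
# dpkg records the interest and later notifies the owning package,
# batched, after all file unpacking is done.
interest /usr/share/icons/hicolor
[/code]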
Indeed. There is always a balance/trade-off between freedom and security. An unchecked doinst.sh script can easily destroy a working system (or worse). As Uncle Ben said - with great power comes great responsibility. But that's why we run as root.
In Fatdog we have sandbox to mitigate this problem somewhat - if you don't trust a package, install it in the sandbox first.
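For anyone curious how that kind of sandbox can work in general, here is a generic overlay sketch - an assumption about the technique, not Fatdog's actual sandbox tool: writes land in a throwaway layer while the real filesystem stays untouched underneath.

[code]
# Generic idea only - not the Fatdog sandbox implementation:
mkdir -p /tmp/sbx/up /tmp/sbx/work /tmp/sbx/root
mount -t overlay overlay \
    -o lowerdir=/,upperdir=/tmp/sbx/up,workdir=/tmp/sbx/work /tmp/sbx/root

# Install the untrusted package inside; all changes land in /tmp/sbx/up:
chroot /tmp/sbx/root installpkg /path/to/untrusted-pkg.txz

# Inspect /tmp/sbx/up, then discard everything:
umount /tmp/sbx/root && rm -rf /tmp/sbx
[/code]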
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]