I don't know if I am missing something, but I think the easiest solution overall is to have a local copy of your pictures on your laptop and sync them regularly. Also keep a local copy of the database (it will update itself when changes to the library are detected anyway). Then, when you are home (or somewhere you can reach your network), use a tool to sync the libraries (I use Unison, which is based on rsync) and call it a day.

I did that a couple of years ago to keep a local copy of my pictures on my laptop so managing them would be faster, and each night I synced the changes (bidirectionally). It worked quite well.
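In case it helps, the nightly sync was essentially a single Unison call over the two picture trees, plus a copy of the database file. A rough sketch in Python (the host name "nas" and all paths are made up; adjust them to your own layout):

```python
import subprocess

# Sketch only: bidirectionally sync the picture tree with Unison, then
# pull the digiKam database file. Host "nas" and all paths here are
# hypothetical placeholders.
subprocess.run(
    ["unison", "/home/me/Pictures", "ssh://nas//volume1/Pictures", "-batch"],
    check=True,
)
subprocess.run(
    ["rsync", "-av", "nas:/volume1/digikam4.db", "/home/me/"],
    check=True,
)
```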
woenx wrote
> And keep a local copy of the database (it will update itself when
> changes to the library are detected anyway).

What I did not make clear is that I do not generally copy the tagging information to the photos themselves, because when doing a lot of tagging I do not want to have to copy all the photos over and over. That may be considered bad practice, but I really don't like having to copy much more than necessary. Every once in a while I will sync this data into the photos, but not each time.

The reason for this is that I want the people I have organized this for to be comfortable tagging thousands of pictures, undoing it, etc., without being concerned that they will have to copy thousands of pictures from one machine to another, as opposed to the small amount of tagging information. Hence what I want is a way to copy a quantity of information that is close in size to the amount of change to the database and photos. In the case of tagging, this is a small fixed amount for each new tag, roughly a row in the Tags and TagTree tables, and for each tagging a row in the ItemTags table. Mutatis mutandis for labels, captions, etc.

So one needs, in effect, to move the database from one machine to the other. This is not just a file copy, because the two databases are different files: they point to different locations via their AlbumRoots. As far as I can see this should be the only difference, so I should be able to just change the row in the AlbumRoots table for the (sole) collection, which again is a very efficient operation. digiKam does start up after this, but it examines each photo and takes a lot longer than it seems it should.

I tried to post a more detailed message about this with an attempt at a better description, but it was apparently rejected as spam...
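For concreteness, the edit I have in mind is roughly this. A sketch only, to be run against a *copy* of the database; the table and column names (AlbumRoots with identifier and specificPath) are what I see in my digikam4.db, so verify them against your own schema:

```python
import sqlite3

# Sketch: repoint the sole collection in a copy of digikam4.db.
# AlbumRoots.identifier normally looks like "volumeid:?uuid=..." and
# specificPath is the collection's path below that volume. Both values
# below are hypothetical placeholders for the new machine, and
# "WHERE id = 1" assumes the sole collection has id 1.
db = sqlite3.connect("digikam4.db")
db.execute(
    "UPDATE AlbumRoots SET identifier = ?, specificPath = ? WHERE id = 1",
    ("volumeid:?uuid=0000-NEW-UUID", "/Pictures"),
)
db.commit()
db.close()
```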
BensonBear wrote
> Hence what I want is a way to be able to just copy a quantity of
> information that is close in size to the amount of change to the
> database and photos. In the case of tagging, this is a small fixed
> amount for each new tag, roughly a row in the Tags and TagTree tables,
> and for each tagging a row in the ItemTags table. Mutatis mutandis for
> labels, captions, etc.

Wouldn't just writing the changes to sidecar files instead of to the pictures (thus leaving the pictures intact) achieve what you want?
woenx wrote
> Wouldn't just writing the changes to sidecar files instead of to the
> pictures (thus leaving the pictures intact) achieve what you want?

I don't know; I have never used sidecars, and I guess I could try. However, at the outset I am skeptical, since this involves a huge number of extra files, and they are not necessarily all small, since many files have a lot of metadata (perhaps they can be limited to the metadata generated through user actions such as tagging). It seems much simpler to have to move only one file and alter it to point to a new location.

I will look into using sidecars, but at this point I would just like to know why I cannot simply alter the AlbumRoot and then use the database from the other machine. (Note: this would be the same question on one machine, if I wanted to move the location of the (sole) collection I have, or even just "mv" it by changing its name.) If only the name of the sole collection changes, why should I not be able to simply alter the database by changing the location of this collection? It seems like this change should be totally transparent.

If one is just tagging, labelling, etc., a large collection on two machines at different times, it seems generally simpler and faster to copy one large file than a large number of small files, which then require a rebuilding of the database. Let's say people find this a bad idea; I would still like to know why it seems to be impossible without rebuilding the database once it is set to point to a different AlbumRoot.
BensonBear wrote
> If only the name of the sole collection changes, why should I not be
> able to simply alter the database by changing the location of this
> collection? It seems like this change should be totally transparent.

Yep, I'm with you on that one. The change should be transparent, without the need to re-scan the whole library. I don't know why that happens. Maybe a developer can chime in?

But in your case the changes are only applied to the database, since the metadata is not updated in the pictures themselves or in sidecar files. What if someday you decide to use another picture manager? All those changes would be lost. Well, you could always write that metadata to the images or sidecars at some point in that case, but still. In my case, parts of my collection are shared among relatives, and they should be able to manage their collection using whatever software they like. Saving changes in the pictures themselves (or in sidecars) allows that.

I agree with you that ideally pictures shouldn't be constantly rewritten with new metadata, as there's always the risk of corrupting an image (and it's slower if you do it over a network), but I tend to leave the RAW pictures intact and work only with their JPG counterparts. Also, regular backups, of course.
woenx wrote
> Yep, I'm with you on that one. The change should be transparent,
> without the need to re-scan the whole library. I don't know why that
> happens. Maybe a developer can chime in?

I believe I found the problem (at least, it is *a* problem that contributes): because of the way I was moving the image files from the original Windows machine to a Linux machine (Windows' built-in FTP, plus wget), the times on the files were wrong. The seconds field had been zeroed out. If the times are restored to their exact values, the method seems to work with no problem.

Now, for future actual use, I am not sure what is best. Probably to find some rsync server for Windows, or perhaps Unison. For now, I just reset the file times when they are first copied via wget, using the times in the digiKam database. That's fast.

> But in your case the changes are only applied to the database, since
> the metadata is not updated in the pictures themselves or in sidecar
> files. What if someday you decide to use another picture manager? All
> those changes would be lost. Well, you could always write that
> metadata to the images or sidecars at some point in that case, but
> still.

Exactly, so this is no problem at all.

> In my case, parts of my collection are shared among relatives, and
> they should be able to manage their collection using whatever software
> they like.

In that case, indeed, it would be a problem.

> I agree with you that ideally pictures shouldn't be constantly
> rewritten with new metadata, as there's always the risk of corrupting
> an image (and it's slower if you do it over a network)

I find it slow regardless, if one is tagging large numbers of files. I also object to it in principle: I think the origin of the data should never be destroyed or altered; it could go on read-only media. Most of the cameras we use do not shoot RAW, so that "origin" is JPEGs or something like that.
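The reset itself is simple. A sketch of what I do, assuming the digikam4.db layout I see here, where Albums.relativePath plus Images.name give the file's path under the collection root and Images.modificationDate holds the original timestamp (the root path is a made-up example):

```python
import os
import sqlite3
from datetime import datetime

# Sketch: restore file mtimes from the digiKam database after a copy
# that zeroed the seconds field. Table/column names are from my
# digikam4.db; COLLECTION_ROOT is a hypothetical path.
COLLECTION_ROOT = "/home/me/Pictures"

db = sqlite3.connect("digikam4.db")
rows = db.execute(
    "SELECT a.relativePath, i.name, i.modificationDate "
    "FROM Images i JOIN Albums a ON i.album = a.id"
)
for rel_path, name, mod_date in rows:
    path = os.path.join(COLLECTION_ROOT, rel_path.lstrip("/"), name)
    if mod_date and os.path.exists(path):
        # modificationDate is stored as text like "2019-09-30T07:35:12"
        ts = datetime.fromisoformat(mod_date).timestamp()
        os.utime(path, (ts, ts))
db.close()
```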
On Mon, Sep 30, 2019 at 07:35, BensonBear <[hidden email]> wrote:
Windows 10 comes with an SSH server embedded, ported from OpenSSH. It's not perfect, but it works with SFTP. I use it at work to synchronize source code in a Windows 10 VM hosted on a Linux box.

Best,

Gilles Caulier
Gilles Caulier-4 wrote
> Windows 10 comes with an SSH server embedded, ported from OpenSSH.
> It's not perfect, but it works with SFTP. I use it at work to
> synchronize source code in a Windows 10 VM hosted on a Linux box.

Thanks. I guess I need an SSH server before I can use something like Unison or rsync. So far I have tried for a few hours to get the SSH server running on Windows 10, with no luck. I am pretty much ready to give up and just always copy files from a removable USB hard drive.
Even if the SSH server is installed, you need to turn on two services on the system before the SSH port is open on the network. Better yet, if you reboot Windows, you need to do it all over again: the system does not save the configuration. Voilà, it's Microsoft trying to reproduce Linux features, with success.

Gilles

On Mon, Sep 30, 2019 at 09:48, BensonBear <[hidden email]> wrote:
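For what it's worth, the two services can be told to start automatically so the setup survives a reboot. A sketch, assuming the service names sshd and ssh-agent used by the Windows OpenSSH port; it must be run from an elevated (administrator) prompt:

```python
import subprocess

# Sketch: set the Windows OpenSSH services to start automatically and
# start them now. Service names (sshd, ssh-agent) are assumed from the
# Windows OpenSSH port; requires an elevated prompt.
for service in ("sshd", "ssh-agent"):
    # sc's argument syntax requires the space after "start=".
    subprocess.run(["sc", "config", service, "start=", "auto"], check=True)
    # net start fails harmlessly if the service is already running.
    subprocess.run(["net", "start", service], check=False)
```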