[GH-ISSUE #400] Multiple databases and localized data #2261
Originally created by @majkinetor on GitHub (May 7, 2023).
Original GitHub issue: https://github.com/1Remote/1Remote/issues/400
Originally assigned to: @VShawn on GitHub.
Once the remote database is not accessible, you can no longer use its connections. We should have a local copy of all remote databases. Connection availability doesn't necessarily have anything to do with database availability: for example, the database may be down for administrative reasons, and people could still work with the last cached copy.
Detailed specification
Multiple databases
Create a new local database for each remote database. This could be done in the current local SQLite database (`1Remote.db`) or in separate files (`<Database>.db`, from Database settings). Separate SQLite files make more sense, since a team member can then send others his own local cache and the connections will just work. This matters if, for example, the remote database is currently not available and someone never connected to it before. With a single SQLite database, he can't do that without also sending others his private connection data.

Having multiple databases provides more options even in a no-remote-database scenario: you can, for example, separate connections into multiple local databases for whatever reason, keeping them grouped per project, per customer (if you are a consultant), etc. Implementing multiple databases for remote databases will automatically enable this option too, with minimal additional development.
With multiple local databases, 1RM should keep metadata about the last successful connection in its own database. Since all databases are cached in memory too, we now have potentially three places where a database lives: memory, local file system, and remote. Only memory is always accessible, since even a local database can get locked by an external process.
The local FS and remote copies can be marked as the authority (memory can never be the authority), which determines how 1RM deals with disconnection. So the original `1Remote.db` is an authority, and a remote MySQL database is an authority, but its local copy is not. For each database (local or remote), 1RM keeps in memory metadata about the last successful connection and the next connection-retry time to the authority. When a connection succeeds, it should also store that in the local FS database (be it the authority or not) and disable the connection-retry timer.
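A minimal sketch of that metadata, in C# since 1Remote is a C# application; all type and member names here (`DataSourceKind`, `DataSourceStatus`, and so on) are invented for illustration and are not from the 1Remote codebase:

```csharp
using System;

// Illustrative sketch only; these types do not exist in 1Remote.
public enum DataSourceKind { Memory, LocalFile, Remote }

public class DataSourceStatus
{
    public string Name { get; set; } = "";   // e.g. "1Remote.db" or a remote MySQL db
    public DataSourceKind Kind { get; set; }
    public bool IsCacheCopy { get; set; }    // local copy of a remote database

    // Memory can never be the authority; a local cache copy of a remote db isn't either.
    public bool IsAuthority => Kind != DataSourceKind.Memory && !IsCacheCopy;

    public DateTime? LastSuccessfulConnection { get; private set; }
    public DateTime? NextRetryAt { get; private set; }   // next connection retry to the authority

    public void OnConnectionSucceeded(DateTime now)
    {
        LastSuccessfulConnection = now;
        NextRetryAt = null;   // disable the retry timer
        // ...and also persist LastSuccessfulConnection into the local FS database here.
    }

    public void OnConnectionFailed(DateTime now, TimeSpan retryInterval)
    {
        NextRetryAt = now + retryInterval;   // schedule the next retry to the authority
    }
}
```

The key invariant is that memory is never the authority, so the retry logic only ever targets the local-file or remote source.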
Once the authority is not available, editing connections coming from it should be disabled (they become read-only, for example by disabling the Save button in the connection editor).
Localized data for remote connection
Based on discussion #392, there seems to be a need for local alteration of remote connections. The original idea was to allow a custom username: team members will either have their own domain names or local OS names. Others mentioned that other connection attributes could be localized too, such as Script before connect, which is a good point.
Given that one may create a local copy of a remote connection, I think this should be optimized for the main use case: custom credentials. A default credential is IMO not flexible enough in most scenarios, particularly in the common case where team members get personalized credentials to a remote resource. This means that we should probably allow for both: creating a default credential in the 1RM configuration (e.g. domain username) and allowing a remote connection to be associated with a local credential. This could be done on first run when specifying an empty username on a remote connection. 1RM could, for example, add another connection field in the local cache.
Technical
Multiple databases
- Keep the default SQLite database `1Remote`, but let the user add another one.
- Keep database files in a `db` directory, in order not to pollute the root file system (no matter whether it is portable or roaming), and since we can have many databases now. This is the most natural place for it that others will rarely touch. Add an "Open database directory" option.
- Show the database name together with its connection counts (e.g. `1Remote [35, 3 selected]`).

Forks
- Keep a `Local Json` column in the `Servers` table, in the local copy of the remote database.
- Add a `Save Local` button apart from `Save`, and keep any edited differences there (1RM already has connection diff due to the bulk-edit attribute comparison).
- Add a `Clear Local` action that empties the entire `Local Json`.

This is basically a fork, and the main benefit is that connection attributes that are not locally touched can still be synced from the remote and affect the local database.
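A rough sketch of how such a fork could be applied, assuming connections are modeled as flat attribute dictionaries (the real `Servers` schema is not shown in this thread):

```csharp
using System.Collections.Generic;

// Illustrative sketch only; 1Remote's real storage model is not shown in this thread.
public static class LocalFork
{
    // Locally edited attributes (the "Local Json" overlay) win over remote values;
    // attributes that were never touched locally keep syncing from the remote db.
    public static Dictionary<string, string> Apply(
        Dictionary<string, string> remote,
        Dictionary<string, string> localJson)
    {
        var merged = new Dictionary<string, string>(remote);
        foreach (var kv in localJson)
            merged[kv.Key] = kv.Value;   // local override takes precedence
        return merged;
    }

    // "Clear Local" empties the overlay, so the next Apply() yields the pure remote view.
    public static void ClearLocal(Dictionary<string, string> localJson) => localJson.Clear();
}
```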
Domain credential
Regarding credentials, there is also merit in letting the connection maintainer specify an AD/LDAP option remotely, as this is widely used; I would do that for a number of my servers, specifying that the connection uses the team's domain name. If we have such an option (e.g. `Use local domain credentials`), one would still need to provide his local domain credentials in the 1RM configuration, as I think there is no way to find out his password other than mimikatz.
- Add a `Use local domain credentials` option.
- Distinguish `Global credentials` from `Local domain credentials` that the user should set up.
- Store credentials in `Local Json` if they are input by the user.
- Copy `Name` but not copy username/password, in order for that user to be able to change them globally afterwards.

@VShawn commented on GitHub (May 9, 2023):
Creating a new local database for each remote database can compromise security, as it may allow users with read-only permissions (who cannot view the connection password stored in MySQL) to access passwords if data is cached in the local database.
Sending cache between users sounds very strange.
And most of the data stored in MySQL is related to the team's connections, which are likely to be internal IP addresses. When a database connection is lost, it is possible that a user has left the Wi-Fi range of the company. In this case, it is reasonable to hide all connections after the database connection is lost.
@majkinetor commented on GitHub (May 9, 2023):
What do you mean? You already have access to the remote database. I didn't mean an exact copy, but a copy of the stuff you can already access and view. If you can't access a password, you won't be able to access it in your local cache either.

I don't think so. It might be a cache, but it's still a valid database. There is no security issue either way, because a user that wants to do that can do it anyway, as he has access. On the contrary, if the remote db is down, and you know that your team member has access to it or will get it anyway, you can just send him the database and enable him to work on the connections.

That's an assumption. Users will do that for sure. Also, the server or the database itself may be down (and will be down during maintenance), not the VPN to the company infrastructure.

Like I said, a user that wants to share company secrets will do that while he is online. You can't stop that. You can make his decision just a little bit easier or a little bit harder, but neither will stop him if there is a will. However, IMO, we miss a bigger opportunity here, for really cool features that many will certainly use:

If you are still concerned about "security", you can add some additional options, like a remote db setting that 1RM will look into to decide whether the database can be cached locally or not. You already expressed concerns about passwords before (#317) and this is no different. "It is reasonable to hide all connections after the database connection is lost" is not security, it's wishful thinking. I will grab things from RAM for sure, or will just go through each connection and print-screen it (even without 1 minute of automation).
@VShawn commented on GitHub (May 10, 2023):
I once again believe that if the reason is "the database may be down," caching online MySQL data to a local SQLite is pointless:
Finally, suppose a scenario where I bring my own laptop to work during the day and continue personal development on it at home in the evening. With your solution, my computer would cache the company's data locally, but I definitely don't want to face more than 30 connections from the company after I get home. Therefore, after the online database goes offline, its projects should be hidden.
Of course, the local cache you proposed has other merits, such as allowing each client to customize the account and password used to connect to a server based on the data in the online database. The issue is how to handle conflicts between the online database and local data. For example, when a connection in MySQL already provides an account & password: how to add a personal account/password locally for it, how to modify the account/password in MySQL after a local one already conflicts with the online one, and how to switch between them. I think these issues can only be resolved through well-thought-out UI design, and I have no good design yet. We may need some flowcharts and prototype diagrams for discussion, to avoid any misunderstanding of each other's views.
@majkinetor commented on GitHub (May 10, 2023):
What does that even mean? Do you want to educate the world on how to be a successful DBA? :) It's the responsibility of the application to be resilient and to work, if it can work.
Easily solvable by looking at the last change date of the database: if the cache is older than the remote, you ditch the cache and recreate it (after all, it's just a cache). This will make forks lost, but nothing is perfect, and this shouldn't be a frequent thing, if it happens at all (it would be easily fixable even in that case, but I wouldn't go there).
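As a sketch, with both helpers passed in as hypothetical placeholders rather than real 1Remote APIs, the rule could look like this:

```csharp
using System;

// Illustrative sketch; both delegates stand in for hypothetical 1Remote internals.
public static class CacheFreshness
{
    public static void RefreshIfStale(
        DateTime cacheLastChange,
        Func<DateTime> getRemoteLastChange,
        Action rebuildCacheFromRemote)
    {
        // If the remote changed after the cache was written, the cache (and any
        // local forks in it) is simply dropped and recreated from the remote.
        if (getRemoteLastChange() > cacheLastChange)
            rebuildCacheFromRemote();
    }
}
```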
The database may be down for maintenance in some places regularly (database updates, security patches, etc.). But that is irrelevant. I don't assume the database will be constantly down, but the VPN connection definitely is constantly down. With the reconnect we have now this is less of a problem, but still, you can't use the database while the VPN is down, and that is an artificial limitation. A scenario is VERY easy to come up with: what if I want to keep cloud connections in my local database, for example? Amazon/Google almost always work while your on-premise database or VPN may not.
It's not that difficult; this is security via obscurity, which doesn't solve the security issue.
Like I said, you can forbid the cache on the remote side if that is your concern, and all problems are solved. It's enough to put something like `cache: not allowed` in the config schema (sketched below) and we are done. You can actually do that from both sides: both the user and the remote db can decide on their own. Why enforce that on users that have better things to do, or do not care, or have different security protections? The main point is, you don't want to decide for users, but give them options and let them decide. In your 30-connections case, I can simply choose not to cache that db, I can collapse it so as not to view it, or I can delete the local cache if I accidentally used it, and all of these take 1 second to do.

Not only a password (it can be done without it), but everything. Your proposal of changing everything may have value, I am just thinking about how to do that :)
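A sketch of that opt-out, with the `cache: not allowed` idea reduced to a boolean read from the remote config; the names are invented for illustration:

```csharp
// Illustrative sketch; neither property exists in any real 1Remote config schema.
public class CachePolicy
{
    public bool RemoteAllowsCache { get; set; } = true; // e.g. "cache: not allowed" => false
    public bool UserWantsCache { get; set; } = true;    // per-database user preference

    // Caching happens only when both sides permit it.
    public bool ShouldCacheLocally => RemoteAllowsCache && UserWantsCache;
}
```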
Like I stated in the original post, there is no conflict: if you have a local fork, it has priority, even if the remote data changes. I proposed that we can have visibility into changes by having 1RM show exclamation marks when it detects them, but that is really nitpicking, as how hard can it be to just reset your local changes and do them again in a single connection? Easy to do anyway, and you already implemented connection diff for bulk changes.
So let me bring the benefits again, that may be lost in discussion:
I am not pushing for this solution either, it's currently just a suggestion, but I think I like it 😃
@VShawn commented on GitHub (May 11, 2023):
I do not intend to do this, but it is the responsibility of the DBA to ensure that the database works.

Yes, it does not solve security, but a local database makes password security worse.

What do you mean?
Like: yesterday, I got a local fork and made private customizations. Today, due to a server IP migration, the HOST field in MySQL was modified by the admin. What will happen to the local fork at this point?
Unless we record the update time of each field, any of the above paths will result in issues.
What I want to express is that this plan can be pushed forward. However, there are still many details that need to be considered, especially since I don't like resolving conflicts between local forks and remote repositories. Instead, I prefer to only store customized content locally (such as personal account passwords), and have the "connect" feature override the remote configuration with the customized configuration stored locally.
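A sketch of this connect-time override, storing only the personal customizations and never merging them into the cached remote record; all names are hypothetical:

```csharp
// Illustrative sketch; types and fields are invented for this example.
public record PersonalOverride(string? UserName, string? Password);

public static class ConnectTime
{
    // The cached remote record is left untouched; personal values are applied
    // only at the moment the connection is opened.
    public static (string UserName, string Password) ResolveCredentials(
        string remoteUserName,
        string remotePassword,
        PersonalOverride? local)
        => (local?.UserName ?? remoteUserName,
            local?.Password ?? remotePassword);
}
```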
Additionally, it may be my misunderstanding, but in my view, the local cache you are currently designing doesn't seem to be significantly different from synchronizing a shared SQLite for a team: both of them keep working when the online service is down, and we can also make side configurations for the customization.
What I expect is:
BTW
We did have multiple local databases supported; I just hid the option.

Well, it allows bulk edit across databases now :)
@majkinetor commented on GitHub (May 11, 2023):
Your local fork most definitely didn't include the HOST field changes. It is not a good idea to change those fields locally, and why would you (although it would be possible)? So let's say you changed credentials, color, tag, and before-script (sounds like a totally legit use case). Now, if the remote side has another color, AT ANY TIME, your own color takes precedence. The same goes for the other attributes. You have to clear the field (or all fields) to get the remote version again. Sounds OK, IMO. Especially if 1RM marks remote changes like this:
This would mean there are differences; by hovering a mouse over it, 1RM could offer an option to reset this field (and also have one reset-all next to the Save button). I believe this is easy to implement, as you already implemented connection diff for bulk editing. Now, the problem is how to treat arrays, tags for example. Do you merge them with the remote? I wouldn't do that. If you added your own thing, the attribute is locked to your version, even for arrays; just reset it and add your own stuff again if you want it, not a big deal.
Not necessary; just one last-change date for the entire connection (which we should have anyway). Then, if the remote last-change date is newer than the local one (not counting your fork changes), you can diff the entire thing and show the differences.
Or it could even be less extensive: just show `!` if ANY change is detected, but allow the user to somehow see the remote connection without local forks; then he can copy-paste changed attributes into the local fork, for example. Different UX, but how much this feature will be used needs to be measured. I wouldn't overcomplicate it until we see usage in production (which reminds me, we need a telemetry system for that and some infrastructure, similar to the MS Store thing, but that is another story we will talk about in the future).
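A sketch of the detection step both UX variants above would rely on; the timestamp guard and field comparison are illustrative assumptions, not existing 1Remote code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch; one last-change date per connection decides *when* to diff,
// and the diff lists the attributes whose remote values moved on.
public static class RemoteChangeDetector
{
    public static IReadOnlyList<string> ChangedFields(
        DateTime remoteLastChange,
        DateTime localBaseLastChange,             // remote state the local fork was based on
        Dictionary<string, string> remoteNow,
        Dictionary<string, string> localBase)
    {
        if (remoteLastChange <= localBaseLastChange)
            return Array.Empty<string>();         // nothing new on the remote side

        // Mark with "!" any attribute whose remote value differs from the base.
        return remoteNow
            .Where(kv => !localBase.TryGetValue(kv.Key, out var old) || old != kv.Value)
            .Select(kv => kv.Key)
            .ToList();
    }
}
```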
You need to set up sync. This works out of the box, without hassle.
What do you mean, all the forks? Customized content is the fork, by definition. You mean all the remote connections? You would keep in the local cache only what I referred to as 'Local JSON' (when I say fork, I refer to local changes on a single connection, so the forks are a collection of changes, one for each changed remote connection).
So, how are you keeping the remote db cache now, since it exists after the first connection?
@majkinetor commented on GitHub (May 11, 2023):
Well, now to summarize the story so far :)
Good story 🏅
@VShawn commented on GitHub (May 13, 2023):
Yes, we actually have a local cache in RAM; the cache is disposed of after 1RM exits.

And 1RM has locally cached files:

- `ConnectionRecords.json` keeps the last connect time of each connection, so the launcher can order by connect time.
- `Locality.json` keeps the group order and whether each group is expanded or not when the app starts.
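The thread doesn't show the actual schema of these two files, but a guessed minimal shape, purely for illustration, might be:

```csharp
using System;
using System.Collections.Generic;

// Guessed shapes; the real JSON layout of these 1Remote files may differ.
public class ConnectionRecord
{
    public string ConnectionId { get; set; } = "";
    public DateTime LastConnectTime { get; set; }   // drives the launcher ordering
}

public class Locality
{
    public List<string> GroupOrder { get; set; } = new();
    public Dictionary<string, bool> GroupExpanded { get; set; } = new(); // restored at app start
}
```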