[GH-ISSUE #400] Multiple databases and localized data #2261

Closed
opened 2026-03-01 17:19:09 +03:00 by kerem · 8 comments
Owner

Originally created by @majkinetor on GitHub (May 7, 2023).
Original GitHub issue: https://github.com/1Remote/1Remote/issues/400

Originally assigned to: @VShawn on GitHub.

Once a remote database is not accessible, you can no longer use its connections. We should have a local copy of all remote databases. Connection availability doesn't necessarily have anything to do with database availability - for example, a database may be down for administrative reasons while people could still work with the last cached copy.

## Detailed specification

### Multiple databases

Create a new local database for each remote database. This could be done in the current local sqlite (`1Remote.db`) or in separate files (`<Database>.db` from Database settings). Separate sqlite files make more sense, since then a team member can send others his own local cache and the connections will just work. This can happen if, for example, the remote database is currently not available and you never connected to it before. With a single sqlite database, he can't do that without also sending others his private connection data.

Having multiple databases provides more options even in a no-remote-database scenario - you can, for example, separate connections into multiple local databases for whatever reason - keeping them grouped per project, per customer (if you are a consultant), etc. Implementing multiple databases for remote databases will automatically enable this option too, with minimal additional development.

With multiple local databases, 1RM should keep metadata about the last successful connection in its own database. Since all databases are cached in memory too, we now have potentially 3 places where a database lives: memory, local file system, remote. Only memory is always accessible, since even a local database can get locked by an external process.

Local FS and remote databases can be marked as *authority* (memory can never be an authority), which determines how 1RM deals with disconnection. So the original `1Remote.db` is an authority, and a remote MySQL database is an authority, but its local copy is not. 1RM keeps in memory metadata about the last successful connection and the next connection-retry time to each authority (for both local and remote databases). When a connection succeeds, it should also store that in the local FS database (be it an authority or not) and disable the connection-retry timer.

Once an authority is not available, editing connections coming from it should be disabled (they become read-only, for example by disabling the *Save* button in the connection editor).

### Localized data for remote connection

Based on discussion #392, there seems to be a need for local alteration of remote connections. The original idea was to allow a custom username - team members will either have their own domain names or local OS names. Others mentioned that other connection attributes could be localized too, such as *Script before connect*, which is a good point.

Given that one may create a local copy of a remote connection, I think this should be optimized for the main use case - custom credentials. A default credential is IMO not flexible enough in most scenarios, particularly in the common case where team members get personalized credentials to a remote resource. This means that we should probably allow for both - create a default credential in the 1RM configuration (e.g. domain username) and allow associating a remote connection with a local credential. This could be done on first run when an empty username is specified on a remote connection. 1RM could, for example, add another connection field in the local cache.

## Technical

### Multiple databases

  1. Add a new option in global configuration Databases - `SQLite`
    1. By default the system comes with one default SQLite database named `1Remote`, but the user can add others
  2. For each remote database, create a local database
    1. Just as with local databases, allow the path to the local cache to be specified; by default it should use the local directory and have the same name as the Database
  3. By default, keep databases in a `db` directory so as not to pollute the root file system (no matter whether it is portable or roaming), and since we can now have many databases. This is the most natural place for them, and others will rarely touch it.
    1. Add an additional sugary option `Open database directory`.
  4. When connecting to a remote database, add a new local copy using SQLite.
  5. On startup, iterate all available *local* databases (be they authorities or not) and add them to the connection list using groups
  6. Add UX options:
    1. Collapse/uncollapse all
    2. Number of connections next to the database name in the list (e.g. `1Remote [35, 3 selected]`)
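The startup scan in step 5 could look roughly like this: walk the `db` directory, open each SQLite file, and group server names by database. This is a hedged sketch only - the `Servers` table with a `Name` column, and the function name, are assumptions for illustration, not 1Remote's actual schema.

```python
import sqlite3
from pathlib import Path


def load_local_databases(db_dir: str) -> dict[str, list[str]]:
    """Group server names by database file, e.g. {'1Remote': ['web-01', ...]}.

    Illustrative schema: each cache file is assumed to have a Servers table
    with a Name column. Each file becomes one group in the connection list,
    keyed by its file name without the .db extension.
    """
    groups: dict[str, list[str]] = {}
    for db_file in sorted(Path(db_dir).glob("*.db")):
        conn = sqlite3.connect(db_file)
        try:
            rows = conn.execute("SELECT Name FROM Servers ORDER BY Name")
            groups[db_file.stem] = [row[0] for row in rows]
        finally:
            conn.close()
    return groups
```

The per-group connection count shown next to the database name (step 6.2) would then simply be `len(groups[name])`.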

### Forks

  1. Add `Local Json` to the `Servers` table in the local copy of the remote database
  2. Provide a `Save Local` button apart from `Save`, and keep any edited *differences* there (1RM already has a connection diff due to the bulk-edit attribute comparison)
  3. Local configuration is always used over the remote one, if it is defined
  4. Add an option `Clear Local` that empties the entire `Local Json`
    1. It would be good (but not mandatory) to have *point of view* UX - *remote* vs *remote with local* - for any attribute that has a local override, so one can switch to see the difference. This feature could have a localized reset (clear only that attribute from the `Local Json`, not everything).

This is basically a fork, and the main benefit is that connection attributes that are not locally touched can still be synced from remote and affect the local database.

### Domain credential

Regarding credentials, there is also merit in letting the connection maintainer specify the AD/LDAP option remotely, as this is widely used - I would do that for a number of my servers, specifying that the connection uses the team's domain name. If we have such an option (e.g. *Use local domain credentials*), one would still need to provide his local domain credentials in the 1RM configuration, as I think there is no way to find out his password other than [mimikatz](https://adsecurity.org/?page_id=1821).

  1. Add a new option `Use local domain credentials` in the configuration editor
  2. Provide a new section `Global credentials` in global configuration
    1. Make one predefined value `Local domain credentials` that the user should set up
  3. If a remote connection has the option in 1. checked, use the option in 2.1 if set; otherwise, ask for credentials on first connection
    1. The user can choose any credential from the global store or input a new one
    2. Remember credentials in `Local Json` if they are input by the user
      1. If a global store credential is used, remember it by `Name` but do not copy the username/password, so that the user can change them globally afterwards


@VShawn commented on GitHub (May 9, 2023):

> Create new local database for each remote database.

Creating a new local database for each remote database can compromise security, as it may allow users with read-only permissions (who cannot view the connection password stored in MySQL) to access passwords if data is cached in the local database.

> send others his own local cache ... This can happen if, for example, remote database is currently not available

Sending cache between users sounds very strange.

And most of the data stored in MySQL is related to the team's connections, which are likely to be internal IP addresses. When a database connection is lost, it is possible that the user has left the company's Wi-Fi range. In this case, it is reasonable to hide all connections after the database connection is lost.


@majkinetor commented on GitHub (May 9, 2023):

> Creating a new local database for each remote database can compromise security, as it may allow users with read-only permissions (who cannot view the connection password stored in MySQL) to access passwords if data is cached in the local database.

What do you mean? You already have access to the remote database. I didn't mean an *exact* copy, but a copy of the stuff you can already access and view. If you can't access a password, you won't be able to access it in your local cache either.

> Sending cache between users sounds very strange.

I don't think so. It might be a cache, but it's still a valid database. There is no security issue either way, because a user that wants to do that can do it anyway, as he has access. On the contrary, if the remote db is down, and you know that your team member has access to it or will get it anyway, you can just send him the database and enable him to work on the connections.

> And most of the data stored in MySQL is related to the team's connections, which are likely to be internal IP addresses.

That's an assumption. Users will do that for sure. Also, the server or database itself may be down (and will be down during maintenance), not the VPN to the company infrastructure.

> When a database connection is lost, it is possible that a user has left the Wi-Fi range of the company. In this case, it is reasonable to hide all connections after the database connection is lost.

Like I said, a user that wants to share company secrets will do that while he is online. You can't stop that. You can make his decision just a little bit easier or a little bit harder, but neither will stop him if there is a will. However, IMO, we are missing a bigger opportunity here, for really cool features that many will certainly use:

  1. If you want to use a connection when the database admin fucked up everything, you can do it. It can even serve as a backup.
  2. If you want to fork a remote connection, this allows it, and you still follow the remote changes of locally unchanged attributes, or even revert (like a typical git branch)
  3. And since you already have multiple local databases, why not adopt it all the way, even for multiple local files (it changes nothing in the implementation, it's just more stuff for the same amount of work)

If you are still concerned about "security", you can add some additional options, like a remote db setting that 1RM looks into to decide whether the database can be cached locally or not. You already expressed concerns about passwords before (#317) and this is no different. *"It is reasonable to hide all connections after the database connection is lost"* is not security, it's wishful thinking. I will grab things from RAM for sure, or will just go through each connection and print-screen it (without even 1 minute of automation).


@VShawn commented on GitHub (May 10, 2023):

I still believe that if the reason is "database may be down," caching online MySQL data to a local sqlite is pointless:

  1. Firstly, it's the responsibility of the DBA to ensure the stability of the online database, rather than relying on 1RM to consider how to restore data if it gets messed up. Backing up databases is something every DBA should do.
  2. Since the online database is stable, caching it to a local file to make use of the data becomes meaningless. The only situation where the database goes offline is when the user disconnects from the network, and it's natural not to be able to use the resources on the network.
  3. There is a potential data consistency issue: in extreme cases, an admin who has been offline for a long time could overwrite the latest online data with old cached data by a mistaken edit. Unless we build a tool like Git for branch management, which I think is overkill.
  4. And security issues: currently users with read-only permission can only load server passwords into memory (encrypted strings); they can use them to connect but cannot see the plaintext, and at least it's difficult to read passwords from memory. But once they are cached to the local database, even a computer novice can copy the password out of the local database (assuming your goal is to allow users to use connections when the db is offline, then you must cache passwords).

Finally, suppose a scenario where I bring my own laptop to work during the day and continue personal development on it at home in the evening. With your solution, my computer would cache the company's data locally, but I definitely don't want to face more than 30 connections from the company after I get home. Therefore, after the online database goes offline, its projects should be hidden.

Of course, the local cache you proposed has other uses, such as allowing each client to customize the account and password used to connect to a server based on the data in the online database. The issue may be how to handle conflicts between the online database and local data: for example, when a connection in MySQL has already provided an account & password, how to add a personal account password locally for it, how to modify the account password in MySQL after a local account password conflicts with the online one, and how to switch between them. I think these issues can only be resolved through well-thought-out UI design, which I have no good design for yet. We may need some flowcharts and prototype diagrams for discussion to avoid any misunderstandings of our views.


@majkinetor commented on GitHub (May 10, 2023):

> Firstly, it's the responsibility of the DBA to ensure the stability of the online database, rather than relying on 1RM to consider how to restore data if it gets messed up. Backing up databases is something every DBA should do.

What does that even mean? You want to educate the world on how to be a successful DBA? :) It's the responsibility of the application to be **resilient** and to work, if it can work.

> there is a potential data consistency issue

Easily solvable by looking at the last change date of the database: if the cache's recorded date is older, you ditch the cache and recreate it (after all, it's just a cache). This will lose local forks, but nothing is perfect and this shouldn't be a frequent thing, if it happens at all (it would be easily fixable even in that case, but I wouldn't go there).
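The staleness rule proposed here amounts to a one-line check. A hedged sketch under assumed names - the cache records the remote's last-change timestamp at sync time, and any divergence means the remote moved on, so the cache (local forks included) is dropped and rebuilt:

```python
def refresh_cache(remote_last_change: float, cache: dict) -> dict:
    """Illustrative sketch: 'synced_at_remote_change' is a hypothetical field
    the cache would store at sync time. If the remote's last-change timestamp
    no longer matches, return a fresh empty cache to be repopulated; otherwise
    keep the existing cache (and any local forks) as-is."""
    if cache.get("synced_at_remote_change") != remote_last_change:
        return {"synced_at_remote_change": remote_last_change, "servers": {}}
    return cache
```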

> Since the online database is stable

The database may be down regularly for maintenance in some places (database updates, security patches, etc.). But that is irrelevant. I don't assume the database will be constantly down, but VPN connections definitely go down constantly. With the reconnect we have now this is less of a problem, but still, you can't use the database while the VPN is down, and that is an artificial limitation. A scenario is VERY easy to come up with - what if I want to keep cloud connections in my local database, for example. Amazon/Google almost always work while your on-premise database or VPN may not.

> And security issues: currently users with read-only permission can only load server passwords into memory (encrypted strings), they can use it for connect but can not see the plaintext. ... And at least it's difficult to read psw from memory.

It's not that difficult, and this is security via obscurity, which doesn't solve the security issue.

> If I use your solution, then my computer would cache the company's data locally, but I definitely don't want to face more than 30 connections from the company after I get home.

Like I said, you can forbid caching on the remote side if that is your concern, and all problems are solved. It's enough to put something in the config schema like `cache: not allowed` and we are done. You can actually do that from both sides: both the user and the remote db can decide on their own. Why enforce that on users who have better things to do, or do not care, or have different security protections? **The main point is, you don't want to decide for users but give them options and let them decide**. In your 30-connections case, I can simply choose not to cache that db, I can collapse it so as not to view it, or I can delete the local cache if I accidentally used it - and all of that takes 1 second to do.

> Of course, the local cache you proposed has other meanings, such as allowing each client to customize using their account password to connect to a server based on the data in the online database.

Not only a password - everything (it can even be done without the password). You proposed that changing everything may have value; I am just thinking about how to do that :)

> The issue may be on how to handle conflicts between the online database and local data, such as when a connection in MySQL has already provided an account & password;

Like I stated in the original post, there is no conflict - if you have a local fork, it has priority, even if the remote data changes. I proposed we could have visibility of changes by having 1RM show exclamation marks when it detects them, but that is really nitpicking, as how hard can it be to just reset your local changes and redo them on a single connection? It is easy to do anyway, and you already implemented connection diff for bulk changes.

So let me bring up the benefits again, which may be getting lost in the discussion:

  1. Having multiple local databases is IMO a great feature - perhaps you are a consultant and want to keep different companies' stuff separated, perhaps you are an organisational freak, perhaps some stuff is more important to you than other stuff and has different security standards (let's say you want to keep it on an encrypted disk that might not always be available) - who cares. Those features can be improved a lot too if we ever want that (config options for various stuff, like don't allow bulk updates across databases, show only the current database's tags when autocompleting tags on a connection, etc.)
  2. If we support multiple local databases, we are one afternoon away from having a local cache of the remote db, and with it features like constant connectivity, easy backup, and "forks"
  3. The system is consistent and easy to understand. I think it's also probably not that big an effort to implement, but you tell me. I would give it a week or so. Everything else is UX we can find a solution for.

I am not pushing for this solution either; it's currently just a suggestion, but I think I like it 😃


@VShawn commented on GitHub (May 11, 2023):

You want to educate the world on how to be a successful DBA? :)

I don't intend to do this, but it is the responsibility of the DBA to ensure the database works.

And security issues: currently users with read-only permission can only load server passwords into memory (encrypted strings); they can use them to connect but cannot see the plaintext. ... And at least it's difficult to read passwords from memory.

It's not that difficult; this is security via obscurity, which doesn't solve the security issue.

Yes, it does not solve the security issue, but a local database makes password security worse.

if you have a local fork, it has priority

What do you mean?

Like: yesterday, I got a local fork and made private customizations. Today, due to server IP migration, the HOST field on MySQL was modified by the admin. What will happen to the local fork at this point?

  1. If the fork is overwritten by the data on MySQL, then yesterday's customizations will be lost. I believe that data loss is not acceptable.
  2. If the fork remains unchanged, with an outdated HOST in it, and the local fork has a higher priority, I will never be able to connect to this connection.

Unless we record the update time of each field, any of the above paths will result in issues.
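Recording a per-field update time would mean keeping, for every field on both sides, the moment it was last modified, and resolving each field toward the newer side. A minimal Python sketch of that idea; the `(value, timestamp)` data shape and the `merge_field` helper are illustration-only assumptions, not 1RM code:

```python
# Hypothetical sketch of per-field update times: each field carries its
# own last-modified timestamp, so a conflict like the HOST migration can
# be resolved field by field (the newer side wins for that field only).
from datetime import datetime

def merge_field(name, remote, local):
    """Pick whichever side modified this field more recently."""
    r_val, r_ts = remote[name]
    l_val, l_ts = local.get(name, (None, datetime.min))
    return r_val if r_ts > l_ts else l_val

# (value, last-modified) pairs per field
remote = {"host": ("10.0.0.99", datetime(2023, 5, 11))}  # admin migrated the IP today
local  = {"host": ("10.0.0.5",  datetime(2023, 5, 10))}  # stale fork from yesterday

print(merge_field("host", remote, local))  # '10.0.0.99' -> newer remote HOST wins
```

Under this scheme the outdated HOST in the fork would be superseded, while fields the user customized more recently than any remote edit would survive.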


What I want to express is that this plan can be pushed forward. However, there are still many details that need to be considered, especially since I don't like resolving conflicts between local forks and remote repositories. Instead, I prefer to only store customized content locally (such as personal account passwords), and enable the "connect" feature to override remote configurations with the customized configurations stored locally.

Additionally, it may be my misunderstanding, but in my view, the local cache you are currently designing doesn't seem to be significantly different from synchronizing a shared SQLite for a team: both can keep working when the online service is down, and we can also make side configurations for the customizations.

What I expect is:

  • After 1RM is launched, it should connect to MySQL successfully at least once before being able to use the connections in MySQL. It should still be usable after disconnecting (this has already been done in #395).
  • The local cache only stores customized content rather than all forks.
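The "customized content only" cache could be sketched roughly like this: the local store holds just the fields the user overrode, and the effective record used at connect time is the remote data with those overrides applied on top. A hypothetical Python sketch; the record shape and the `apply_overlay` name are assumptions, not 1RM's actual API:

```python
# Minimal sketch: the local cache stores only customized fields (e.g. a
# personal account/password), and "connect" overrides the remote config
# with them. All names here are hypothetical, not 1RM's actual code.

def apply_overlay(remote_record: dict, local_overrides: dict) -> dict:
    """Return the record used for connecting: remote fields,
    with any locally customized fields taking precedence."""
    effective = dict(remote_record)    # start from the remote config
    effective.update(local_overrides)  # local customizations win
    return effective

remote = {"host": "10.0.0.5", "port": 3389, "user": "shared", "color": "blue"}
local = {"user": "majkinetor", "color": "red"}  # only customized fields stored

print(apply_overlay(remote, local))
# host/port come from remote, user/color from the local overlay
```

Note that with this shape, a remote HOST change flows through automatically, since HOST was never part of the local overlay.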

BTW

If we support multiple local databases,

We actually do have multiple local databases supported, I just hid the option

![image](https://github.com/1Remote/1Remote/assets/10143738/7cd78869-30e3-4f83-ad1f-fd2611aa2a39)

like don't allow bulk updates across databases

Well, it allows bulk edits across databases now :)


@majkinetor commented on GitHub (May 11, 2023):

Like: yesterday, I got a local fork and made private customizations. Today, due to server IP migration, the HOST field on MySQL was modified by the admin. What will happen to the local fork at this point?

Your local fork most definitely didn't include the HOST field changes. It is not a good idea to change those fields anyway, why would you (although it would be possible). So let's say you changed credentials, color, tag and before-script (sounds like a totally legit use case). Now if the remote side has another color, AT ANY TIME, your own color takes precedence. The same with the other attributes. You have to clear the field (or all fields) to get the remote version again. Sounds OK, IMO. Especially if 1RM marks remote changes like this:

![image](https://github.com/1Remote/1Remote/assets/85767/47fee710-8ffe-4e9f-a9c4-e16e68d93771)

This would mean there are differences; by hovering the mouse over a field, 1RM could offer an option to reset it (and also have one reset-all next to the Save button). I believe this is easy to implement since you already did implement connection diff for bulk editing. Now, the problem is how to treat arrays, tags for example. Do you merge them with remote? I wouldn't do that. If you added your own thing, the thing is locked for you only, even for arrays; just reset it and add your own stuff again if you want it, not a big deal.
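The precedence-plus-reset behavior might be sketched like this (a hypothetical Python sketch under assumed data shapes; none of these helper names are 1RM code):

```python
# Sketch of "fork field always wins, reset restores remote": a forked
# field takes precedence over the remote value at any time; resetting a
# field removes it from the fork so the remote value shows through again.
# Arrays (e.g. tags) are forked whole rather than merged.

def effective_field(name, remote, fork):
    # the fork value takes precedence AT ANY TIME
    return fork.get(name, remote.get(name))

def reset_field(fork, name):
    # "clear the field to get the remote version again"
    fork.pop(name, None)

def changed_fields(remote, fork):
    # fields 1RM could mark because the fork diverges from remote
    return {f for f in fork if f in remote and fork[f] != remote[f]}

remote = {"host": "10.0.0.9", "color": "blue", "tags": ["prod"]}
fork = {"color": "red", "tags": ["prod", "mine"]}  # arrays forked whole

assert effective_field("color", remote, fork) == "red"
reset_field(fork, "color")
assert effective_field("color", remote, fork) == "blue"  # remote shows through
```

The `changed_fields` set is what a per-field "!" marker and the hover-to-reset option would be driven by.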

Unless we record the update time of each field, any of the above paths will result in issues.

Not necessarily, just one last-change date for the entire connection (which we should have anyway). Then if the remote last-change date is newer than the local one (not counting your fork changes), you can diff the entire thing and show the differences.

Or it could even be less extensive: just show ! if ANY change is detected, but allow the user to see the remote connection somehow without local forks; then he can copy-paste changed attributes into the local fork, for example. Different UX, but how much this feature will be used needs to be measured. I wouldn't overcomplicate it until we see usage in production (reminds me, we need a telemetry system for that and some infrastructure, similar to the MS Store thing, but that is another story we will talk about in the future).
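A minimal sketch of the one-timestamp-per-connection check: only run a diff when the remote record is newer than the snapshot we cached, then flag whichever fields differ. Hypothetical Python with assumed names and shapes, not 1RM's actual implementation:

```python
# One last-change date per connection: diff the whole record only when
# the remote copy is newer than what we cached at last sync, and mark the
# connection with "!" if anything differs. Names here are hypothetical.
from datetime import datetime

def detect_remote_changes(cached_snapshot, cached_ts, remote_record, remote_ts):
    """Return the fields changed remotely since our last sync;
    empty when the cache is still current (fork edits don't bump cached_ts)."""
    if remote_ts <= cached_ts:
        return []  # nothing newer on the remote side, skip the diff
    return sorted(f for f in remote_record
                  if remote_record.get(f) != cached_snapshot.get(f))

cached = {"host": "10.0.0.5", "color": "blue"}
remote = {"host": "10.0.0.99", "color": "blue"}  # HOST migrated by the admin

changed = detect_remote_changes(cached, datetime(2023, 5, 10),
                                remote, datetime(2023, 5, 11))
print(changed)  # ['host'] -> show "!" on this connection and offer a diff/reset
```
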

Additionally, it may be my misunderstanding, but in my view, the local cache you are currently designing doesn't seem to be significantly different from synchronizing a shared SQLite through sync for a team

You need to set up sync for that. This works out of the box without hassle.

The local cache only store customized content rather than all forks.

What do you mean, all the forks? Customized content is the fork, by definition. You mean all the remote connections? You would keep in the local cache only what I referred to as 'Local JSON' (when I say fork, I refer to local changes on a single connection, so the forks are a collection of changes, one for each changed remote connection).

So, how are you keeping the remote db cache now, since it exists after the first connection?


@majkinetor commented on GitHub (May 11, 2023):

Well, now to summarize the story so far :)

  1. We actually have multiple local databases, lol, so unhide them and present them as groups, the same as with remote
  2. We actually have local cache, just not mapped to local database, but somewhere...

Good story 🏅


@VShawn commented on GitHub (May 13, 2023):

Yes, we actually have a local cache in RAM; the cache is disposed of after 1RM exits.

And 1RM has local cached files:

![image](https://github.com/1Remote/1Remote/assets/10143738/81a528f4-3e8d-4998-af70-3608be607a52)

ConnectionRecords.json keeps the last connect time of each connection, in order to make the launcher order by connect time.

Locality.json keeps the group order and whether each group is expanded when the app starts.
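Ordering the launcher by last connect time from such a file could look roughly like this. This is a hypothetical sketch: the JSON shape, the `connections`/`lastConnect` keys, and the IDs shown are all illustration-only assumptions, not ConnectionRecords.json's actual format.

```python
# Hypothetical sketch of ordering launcher entries by last connect time
# from a ConnectionRecords.json-style file. The JSON shape here is an
# assumption for illustration, not 1RM's actual file format.
import json

records_json = '''
{
  "connections": [
    {"id": "web-01", "lastConnect": "2023-05-11T10:00:00"},
    {"id": "db-01",  "lastConnect": "2023-05-13T08:30:00"},
    {"id": "app-01", "lastConnect": "2023-05-12T17:45:00"}
  ]
}
'''

records = json.loads(records_json)["connections"]
# ISO-8601 strings sort lexicographically, so no date parsing is needed;
# most recently connected first, as the launcher ordering described above
ordered = sorted(records, key=lambda r: r["lastConnect"], reverse=True)
print([r["id"] for r in ordered])  # ['db-01', 'app-01', 'web-01']
```
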
