Module (package) containing the implementation of the usual edit policies, which delegate to the Logical Model implementation to obtain the commands to be executed for a user request (create, move, drop, etc.).
Module (package) containing the high-level APIs that should be called when interacting with the semantic model and notation model. It should contain methods like: Command addMessage(); Command deleteMessage(); Command moveMessage(); etc. These APIs should also provide some validation methods, as the computation for the feedback: can this element be moved up/down, can it be inserted there, etc.
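A possible shape for this API, as a non-authoritative sketch: the UML2 parameter types follow the suggestion below of using UML objects as reference points, and the nested `Command` abstraction merely stands in for whichever command framework (EMF, GEF, or a dedicated one) is chosen.

```java
import org.eclipse.uml2.uml.Lifeline;
import org.eclipse.uml2.uml.Message;

interface InteractionEditingApi {

    /** Stand-in for whichever command framework ends up being used. */
    interface Command {
        boolean canExecute();
        void execute();
    }

    /** Creation, deletion and move, expressed against UML objects only. */
    Command addMessage(Lifeline source, Lifeline target, Message insertAfter);
    Command deleteMessage(Message message);
    Command moveMessage(Message message, int ySend, int yReceive);
}
```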
Antonio: I don't have any strong opinion on the APIs returning commands versus just performing the action. Since the command needs the request in most cases, that in my opinion conceptually couples the UI layer with the control layer; to me it sounds more natural to have the policies handle the command / request paradigm. By letting the API return commands we run the risk of the request leaking into the API, which may be a problem, as the whole point of the API is to shield the semantic and graphical logic behind the UI and decouple from it. But as I said, I don't have a strong opinion about it; I just don't want to see any GEF / GMF request coming into the API. What we must not do is mirror the request types on this API, such as re-attach anchors and so on. We need to keep the API high level, close to how the end user sees the actions, not how they are translated into requests.
Remi: What should the parameter kinds be for these methods: UML2, Views, or another 'logical' model, as proposed by Philip? In the latter case, that would mean that one submodule would correspond to the model definition itself and a second to the 'edit' part. If not, we may rename this module to something like 'logical' only.
Antonio: I think it should be the UML objects that are used as reference points, like the existing message to insert before / after, the fragments + message before / after to insert in, etc., and, optionally, all the graphical information needed to perform the operation, such as the y-coordinates of the anchor positions.
Philip: Imho we should have our own "model" representing the full interaction, similar to what I started. This would make it easier than interacting directly with UML2. However, in my opinion this model should have references to the actual UML2 / notation elements. This model is supposed to be the "facade" hiding all the other modules (Layout, Semantic Graph, etc.), right? Do I understand this correctly, when assuming it may look as follows:
```java
Command cmd = interaction.addMessageAfter(otherMessage);
cmd.showFeedback(); // visualizes whether cmd.canExecute()
// if the user actually performs the action
if (cmd.canExecute()) {
    cmd.execute(); // would use the other modules to 1) create the semantic
                   // element, 2) create the notation element, 3) re-layout, etc.
}
```
Remi: For feedback, I would also propose an API symmetric to add/remove/move, such as canAddMessage(), canDeleteMessage(), etc., which by default is equivalent to getting the command and asking canExecute() to check feasibility.
Antonio: Feedbacks change based on the information provided in the request, and they are provided by the edit policies. I see two issues here: 1) I don't like that the API should know anything about the requests, and 2) as the information needed to calculate the feedback in most cases comes from the graph structure, I don't like making that available to the command directly; it couples things way too much. I think the API should provide the information the policy needs to create and handle the graphical feedback, such as the upper and lower limits between which the element can be moved without starting a reordering, the lower limit of the previous group and the upper limit of the next group, anchoring positions, etc.
Christian: I recommend not overloading the API with a mirror suite of `canXyz()` operations. If we are exposing all behaviour as `Command`s, then clients should be expected to query those commands for `canExecute()`. Putting these operations on the Logical Model API would be redundant.
Agree, if we use commands we should use the `canExecute()` methods in them.
Remi: Should the API already think in terms of groups of modifications, e.g. move a group of messages? This may have some impact on the API definition and on the computation of the graph structure defined later.
Antonio: Correct, the API needs to think in terms of groups, always. That is the whole point.
Philip: I would expect it to have an API similar to the semantic modifications, such as
```java
Command c = parent.graphicallyMoveDown(message, 20);
```
and this command encapsulates all other shifting/moving of elements. If a user selects multiple elements and starts moving them, I guess there should be another method accepting multiple elements to be moved.
Antonio: I don't like the idea of splitting graphical operations from semantic operations. We should have functions such as moveMessage(m, y1, y2), not mirroring every possible operation in the API but providing slightly more generic methods, so that once the API is defined, it can be stable. In the example, moveMessage() may be used to move a horizontal message (y1 = y2), to move a message with a delayed receive (y1 < y2), or to change from one to the other. It could also be used to change its position in the semantic order. Internally, the implementation will probably delegate to different methods, but the exposed service should provide somewhat generic functions.
Philip: But if we don't have one method for a graphical move and one for reordering, how can the implementation derive from this call whether to execute a graphical move (implicitly pushing all subsequent elements downwards if the new y is greater than the old) or a reordering (not pushing all subsequent elements downwards, but rather making room for the element at the new y position)?
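One conceivable answer, as a sketch, is to reuse the "upper and lower limits between which the element can be moved without starting a reordering" mentioned earlier: compare the requested positions against those limits and pick the implementation accordingly. Everything here (`SequenceGraph`, the limit accessors, the two private moves) is hypothetical.

```java
class MessageMoveService {
    interface Command { boolean canExecute(); void execute(); }

    /** Provides the reorder limits mentioned above; hypothetical. */
    interface SequenceGraph {
        int upperReorderLimit(Object message); // smallest y keeping the order
        int lowerReorderLimit(Object message); // largest y keeping the order
    }

    private final SequenceGraph graph;

    MessageMoveService(SequenceGraph graph) {
        this.graph = graph;
    }

    Command moveMessage(Object m, int ySend, int yReceive) {
        int upper = graph.upperReorderLimit(m);
        int lower = graph.lowerReorderLimit(m);
        if (ySend >= upper && yReceive <= lower) {
            return graphicalMove(m, ySend, yReceive); // pure pixel shift
        }
        return reorderMove(m, ySend, yReceive);       // changes the semantic order
    }

    private Command graphicalMove(Object m, int y1, int y2) { return null; /* sketch */ }
    private Command reorderMove(Object m, int y1, int y2) { return null; /* sketch */ }
}
```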
Christian: Does it actually make sense to request the move of multiple elements in one API call? This diagram really does constrain visual elements to relative positioning, so that e.g. moving one interaction fragment usually implies moving all others below it. If there are cases for requesting multiple objects be moved by different degrees, then I would suggest that this too be accomplished by composition of commands. The `Command` implementations returned by the Logical Model API can self-compose into a smart `CompoundCommand` that understands the layout constraints and potential conflicts between its composed commands that determine the overall (non-)executability of the compound.
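A minimal sketch of such a self-composing compound, assuming a `conflictsWith(..)` hook on the composed commands (all names are invented):

```java
import java.util.ArrayList;
import java.util.List;

class SmartCompoundCommand {
    interface LayoutAwareCommand {
        boolean canExecute();
        void execute();
        /** e.g. two moves competing for the same vertical space */
        boolean conflictsWith(LayoutAwareCommand other);
    }

    private final List<LayoutAwareCommand> parts = new ArrayList<>();

    void add(LayoutAwareCommand c) { parts.add(c); }

    /** The compound is executable only if all parts are and none conflict. */
    boolean canExecute() {
        for (int i = 0; i < parts.size(); i++) {
            if (!parts.get(i).canExecute()) return false;
            for (int j = i + 1; j < parts.size(); j++) {
                if (parts.get(i).conflictsWith(parts.get(j))) return false;
            }
        }
        return true;
    }

    void execute() {
        parts.forEach(LayoutAwareCommand::execute);
    }
}
```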
Philip: Right, I agree. The only use case where it could make sense is when the move is a move of semantic elements (e.g., take these three messages and move them into a fragment).
Antonio: I disagree; you may select a couple of unrelated messages to move down, for example when you want to make room to insert a note or a comment. So yes, from a usability point of view it may be convenient to have this possibility.
Remi: The usual decomposition of a request into commands should be done there, rather than browsing each edit policy with different requests, if I understood well. A call to the create-element API would be decomposed into three different steps: the semantic creation, relying on the element type framework; the view creation (view providers, graphical types registry, etc.); and the layout/graphical updates (which are then delegated to the graph structure?). This would return a composite command to be executed.
Antonio: It should probably be more like semantic creations and view creations; I don't like the idea of invoking the layout afterwards. The view creation commands should create the views in their correct positions, and as such, the layout won't need to be involved. However, some information used by the layout component may need to be considered (see the comments about that below).
Philip: Exactly, this is how I would understand it too. The command that is returned from the API is a single command, but its execution would then decompose into several commands that can actually be executed. All of these command executions again rely on the other modules (layout, element type framework, ...).
Antonio: As I said before, I have no strong opinion on whether it is a single command or the API actually executes the action. But if the API provides the commands, it must be a single command. I also do not have any strong opinion on whether it should be one compound command or one big monster. But if it is a compound one, I don't want commands in the chain that counteract the result of the ones previously executed, as happens in many cases in the current implementation, where policies provide commands to patch or counteract the commands provided by previous policies.
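A sketch of that decomposition under the constraints just stated (all names invented): the API hands out one single command whose parts are the semantic creation and the view creation, the views are created directly at their final positions, and no part counteracts an earlier one.

```java
import java.util.ArrayList;
import java.util.List;

class MessageCreationService {
    interface Command { boolean canExecute(); void execute(); }

    /** Parameters of the creation gesture (all invented). */
    record Params(Object source, Object target, Object insertAfter, int y) {}

    Command createMessage(Params p) {
        List<Command> parts = new ArrayList<>();
        parts.add(semanticCreation(p)); // element types framework
        parts.add(viewCreation(p));     // view providers / graphical types;
                                        // creates the views at their correct
                                        // positions and shifts elements below
        // Deliberately no trailing layout pass and no command that patches
        // or counteracts an earlier one.
        return new Command() {
            @Override public boolean canExecute() {
                return parts.stream().allMatch(Command::canExecute);
            }
            @Override public void execute() {
                parts.forEach(Command::execute);
            }
        };
    }

    private Command semanticCreation(Params p) { return null; /* sketch */ }
    private Command viewCreation(Params p) { return null; /* sketch */ }
}
```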
One or several ways to make the diagram pretty: compacting and expanding the space between elements, reordering the lifelines to minimize the number of crossings between messages and lifelines, wrapping labels and moving the rest of the elements down to make room when a label is wrapped over several lines… That is what I understand by laying out the diagram: not making it correct after applying some changes to the semantic model, which would force you to consider a high number of corner cases (as in the current implementation). The layout, like the logical model, should provide the command chain to perform the changes, or just perform the changes, in any case in the same fashion as the logical model.
Remi: I see two aspects here:
- One that I would call 'Optimization' or 'Rearrange', for lifeline ordering, compacting space, etc. This one has an optional aspect (even if very useful for the end users!). It would rely on computing metrics about the representation: how many times two lines cross, the total length of the messages, the average length of messages, the max length of a message, etc. It is usually triggered by the user, because it deletes the user's modifications.
- I see the other one as being the basic layout, i.e. telling where the visual representation should be placed in the general picture (as XYLayout). It is based on the defaults from graphical constraints or on specific information given by the user, which may be a tricky aspect to retrieve and maintain.
As far as I understand, this component only deals with improving the layout (an optional command), but not with ensuring its validity w.r.t. the semantic order, which should be taken care of by the semantic graph, right? So this layout component would be invoked by the logical model API with another set of methods, such as `interaction.removeGaps();`.
Antonio: I think the layout should only deal with "optimizations", but without changing the order. One of the things that annoys me most about diagram auto-layout algorithms is the obsession with minimizing line crossings. In a class diagram, the classes put in the middle are normally the more relevant ones, and for generalization relations we tend to put the more generic one on top or to the left and lay them out horizontally or vertically; anything else makes the diagram quite confusing. In sequence diagrams, the leftmost lifeline is normally the user who triggers the whole sequence, and the rightmost ones normally represent a back-end service, database, file access, etc. Those are the semantic aesthetic principles, and normally it is nice to keep them. So I am thinking more of compacting / expanding the diagram and that kind of thing, and maybe also (I have not given it much thought) of being in charge of the total or partial reconstruction of the diagram when the associated notation information is not available or is for some reason violating the semantic order.
Christian: But the layout problem is also an incremental one, IMHO. Creation of a new semantic element, either appended to the bottom of the sequence flow or inserted somewhere in the midst of it, must maintain this "prettiness" aspect of the diagram in keeping with the entire diagram layout. I doubt that this can be optional. As a user, I would expect that by default my pointer gestures for creation of new elements would indicate only where to place them, not actually how to lay them out: that would be automatic.
All: We need to distinguish between "maintaining" the layout ("keeping it pretty incrementally") and automatically rearranging the layout ("making it pretty in one shot"). An example of the former is making room for new messages by moving all messages below downwards; an example of the latter is changing the order of the lifelines to keep message-lifeline crossings at a minimum. It has to be clarified whether this component contains the logic of the former or the latter. In any case, in the first iterations of the development we should focus only on maintaining the layout rather than providing rearranging capabilities.
Antonio: Yes, it is an incremental problem, and yes, it is not optional. But the question here is whether we need to create a layout mechanism that is incremental. If I create a new message between another two, I expect that there is a gap big enough to allow me to create the new message. That gap should be at least the defined message bottom padding (previous message) + top padding (next message). Once I create the message, the following elements in the diagram need to be shifted down by the distance (if any) added between the previous message and the new one, and between the new one and the next message, to keep the padding distance. That "prettiness" should actually be implicit in the view creation commands, which will create the new message in the proper position and provide the commands to shift down the other elements in the diagram. It is the graph structure that knows all the relations and has all the information needed by the layout, so this rearranging after an insertion can easily be done as part of the view handling commands. I do not consider it auto-layout. Also, I think it would be too complex to implement, and to use, a layout mechanism that only re-arranges a subset of the elements. I think it is much simpler to implement a layout that just "compacts" the whole diagram. The same implementation will "expand" the diagram when given higher padding values as arguments. And that, I think, would be enough.
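A minimal sketch of that "compact / expand" idea, assuming a simple top-to-bottom list of rows (messages, fragment borders, ...) with a mutable y position; calling it with larger padding values expands the diagram. All types are invented.

```java
import java.util.List;

class DiagramCompactor {
    interface Row {
        int getHeight();
        void setY(int y);
    }

    /** Re-spaces all rows to exactly `padding`; larger values "expand". */
    static void compact(List<Row> rowsTopToBottom, int padding) {
        int y = padding;
        for (Row row : rowsTopToBottom) {
            row.setY(y);                    // keep order, normalize spacing
            y += row.getHeight() + padding; // next row starts below this one
        }
    }
}
```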
This is an API to access the defined graphical constraints: padding between different elements, padding between lifelines, default font sizes, … Anything that may have an effect on the graphical location of elements in the diagram needs to be considered when laying out the diagram.
Remi: That sounds like a mixture of the SWT data mechanism for layouts (like GridData for GridLayout) and some user-defined information stored in the notation. However, I see no link to the notation model, so I may have misunderstood something.
Christian: Is this intended to define layout constraints? If so, it needn't actually relate to the Notation Model, because it can refer more abstractly to the concepts in the Layout and Semantic Graph components. These constraints are primarily an input to the layout algorithms. Although, data in the notation model that makes the user's choices about the visualization explicit would also be "constraints" in the general sense, and probably fairly strong constraints.
Antonio: Maybe constraints is not the best name for it. What I meant is that the size of a message is also given by the size of its label, which is defined by the number of lines it is wrapped into (if it is), and the line height is given by the font size, which in turn is based on the DPI selected in the system. So I was thinking we need a component that deals with all those aspects, calculating those sizes and providing access to the default / user-defined values, such as padding for messages, lifelines, etc., and also the default graphical styles (fonts, line widths, text wrapping, etc.). This information is needed to calculate the width of a lifeline, based on its label and on the visibility of the label parts (type, name, etc.). It will surely be used by the layout and by the logical model / graph structure.
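A hypothetical shape for such a component (every name here is invented): one place to query paddings, default styles, and derived label sizes.

```java
interface GraphicalConstraints {

    /** Default or user-defined padding, e.g. above/below a message. */
    int topPadding(String elementKind);
    int bottomPadding(String elementKind);

    /** Default graphical styles: font, line width, wrapping policy, ... */
    FontSpec defaultFont(String elementKind);

    /**
     * Size of a (possibly wrapped) label, taking font size and system DPI
     * into account; used e.g. to derive the width of a lifeline header.
     */
    Size labelSize(String label, String elementKind, int maxWidth);

    record FontSpec(String name, int sizeInPoints) {}
    record Size(int width, int height) {}
}
```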
(This is what I have called the graph representation.) It is a directed graph of “triggering” dependencies, used to compute which elements are impacted by an action on a set of elements, for example: move a message to that place. Taking the example:
We can define the following groups (rectangular areas) based on fragments (nodes) and messages (arrows). The big arrows represent triggering relations between groups, associated with a message.
Now we create a dependency graph based on the picture above. By dependency, I mean that a fragment / group is affected when editing that fragment:
- The group / fragment is contained in another group (black arrows)
- The group is triggered by a message (blue arrows), or the fragment is connected to another fragment due to a message
- Other implicit references: interaction use / combined fragment marks (start / end marks) (red arrows); the relation from the root / diagram node to the lifelines (not on the picture)
Containment relations are ordered according to the position on the lifeline (for lifelines), and according to the semantic order for interaction border elements or floating elements (lost & found end points).
Lifeline relationships are ordered by the semantic model (the lifelines' relation in the UML model) and tagged with the x-coordinate, and with the y-coordinate in case the lifeline is created by a create message. Each node is tagged with the fragment it represents, with the graphical position on the lifeline (y-coordinate) or the point for interaction border items and floating elements, and with the notation view, if any. This representation, when fully tagged, is enough to reconstruct the diagram exactly the way the user defined it. It also allows easy manipulation of the diagram: say you change a lifeline's position, you know exactly which parts are affected; the same goes for adding / removing / deleting messages.
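A sketch of a fully tagged node, transcribing the description above into code; all field and type names are illustrative only.

```java
import java.util.List;

class SequenceGraphSketch {
    enum EdgeKind { CONTAINMENT, MESSAGE_TRIGGER, IMPLICIT }

    record Point(int x, int y) {}

    static class Node {
        Object fragment;        // the fragment this node represents
        Integer yOnLifeline;    // graphical position on the lifeline, or...
        Point absolutePoint;    // ...a point for border / floating elements
        Object notationView;    // the associated notation view, if any
        List<Edge> outgoing;    // ordered containment, trigger, implicit edges
    }

    static class Edge {
        EdgeKind kind;          // black / blue / red arrows in the picture
        Node target;
    }
}
```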
That is my idea: query this data structure from “your builder / my API” to get the whole command chain needed to perform a change, to calculate visual feedback for the tool, or to validate constraints (what is allowed or not, like a send endpoint after the receiving endpoint, etc.).
Some advantages I see:
- The structure has all the information needed to recreate the diagram exactly as the user defined it.
- It can be built from the semantic model alone, applying the default layout constraints; with the notation model, it can easily be built accurately.
- The builder / logical model could have a list of constraints to verify a given request, which makes it quite configurable and decoupled from policies and the like.
- Different layout algorithms can be implemented, like a “pack” sort of thing, which reduces all the distances between messages / groups / fragments / lifelines to a given distance, etc.
- The structure is quite fast to build. You can build it when first processing the request and keep it in the request. Once the commands have been cancelled or executed, you don't need it anymore, and there is no need to sync it with the changes. For the next edit request to process, we build it again.
Remi: This graph is an on-the-fly computed model of the sequence diagram. Each node of this graph may have a relation to a Notation element. It nicely reflects the grouping as the end user would see it when interacting with the element. It is used to compute which elements are impacted by a modification. Each node may contain a position on the lifeline (distance from the top or from the beginning of the lifeline?) or an absolute position if it is a "moving" point, like a found/lost message end.
Antonio: Correct
The elements of this model would then contain the data required for laying out the diagram, and the constraints on those elements if there are any (no backward messages, for example).
Antonio: About those constraints, I am not sure; it depends on which role we give it: just a read-only structure, or do we provide some kind of graphical validation mechanism... I will need to think more about it... But yes, that is the idea.
This data structure allows describing a dependency for a move other than 'contained' as defined in the usual notation/draw2d, which covers the case of a fragment being 'over' some messages or some other fragments. Once the command is executed (or not), this data structure may be disposed. This may lead to some performance issues; as it is built on the fly, there are however no synchronization issues.
Antonio: Correct. Currently the clipboard data is serialized and deserialized every time the context or the edit menu is shown, and that implies a few MB of memory handling... I am not sure it will be an issue at all; we can keep it in the logical model instance and the logical model in the request, so it is only created once per user interaction...
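A sketch of that lifecycle on the policy side: build the graph lazily per user interaction, cache it in the request's extended-data map, and let the next request simply rebuild it. `SequenceGraph` and the key are hypothetical; `Request#getExtendedData()` is the GEF hook assumed here.

```java
import java.util.Map;
import org.eclipse.gef.Request;

class GraphCache {
    private static final String KEY = "org.example.sequence.graph"; // invented

    @SuppressWarnings("unchecked")
    static SequenceGraph graphFor(Request request, Object interaction) {
        Map<Object, Object> data = request.getExtendedData();
        SequenceGraph graph = (SequenceGraph) data.get(KEY);
        if (graph == null) {
            graph = SequenceGraph.buildFrom(interaction); // fast, on the fly
            data.put(KEY, graph);
        }
        return graph; // no model listeners needed: the next request rebuilds it
    }

    interface SequenceGraph {
        static SequenceGraph buildFrom(Object interaction) { return null; /* sketch */ }
    }
}
```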
The dependency graph makes it easy to know which elements are impacted by a move, but it may be harder to compute the real changes to be applied. Moving/inserting a message would be translated into a series of set-Y-pixel commands for all impacted elements, but how would that Y be computed without losing user-defined values?
Philip: If it is just adjustments, the user-defined values wouldn't be lost, as everything is shifted together in a certain direction, right? I think in most of the use cases it is just moving all dependent elements down (e.g. adding a message) or pulling them up (e.g. after a user has reduced the space between a message and its previous message, all messages below have to be pulled up to avoid creating a gap). However, I guess there may be cases where the user-defined values in the layout are lost, as it is sometimes unclear what the user intended (did s/he intend a gap after message A when moving down a message B that is right below message A, or was the intent to add a gap before message B?).
Antonio: Correct. The graph is annotated with the notation model, which contains the graphical positions. If, to insert a message, you need to move everything 30 pixels down, you just add 30 pixels to the current position of all the elements belonging to one of the groups following that message, and of all the elements in the same group that follow that message. In that way the overall user positions are kept. And yes, the complexity is in calculating those changes. For that we could follow two approaches: 1) the logical model calculates the diffs based on the info provided by the graph and the information in the request; 2) the graph structure calculates that information and sets it in each node of the graph, so each node has the new y-position, and as it has a pointer to the view, which holds the current position, just by iterating through the structure you can see whether there is a change by comparing the y-position in the graph node with the position contained in the view. It is because of 2) that the graph may need to have access to the GraphicalConstraints component.
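A sketch of approach 2): every node carries the newly computed y-position and a pointer to its view, so a single walk over the graph yields the set of moves to apply (all types and accessors are invented):

```java
import java.util.ArrayList;
import java.util.List;

class VerticalDiffComputer {
    interface Node {
        int computedY();   // new y set in the graph node
        View view();       // notation view holding the current y
    }
    interface View {
        int currentY();
    }
    record Move(View view, int deltaY) {}

    static List<Move> computeMoves(Iterable<Node> nodes) {
        List<Move> moves = new ArrayList<>();
        for (Node node : nodes) {
            int delta = node.computedY() - node.view().currentY();
            if (delta != 0) {
                moves.add(new Move(node.view(), delta)); // pixels to shift
            }
        }
        return moves;
    }
}
```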
Christian: I think that storing relative position information in the persisted notation, in both x- and y-coördinates, would help to make it clear what is the user intention. For example, extra space drawn by the user in either dimension is clearly expressed as a larger relative value (delta) than usual. The dependency graph makes it abundantly clear what the position of an object is measured relative to: the vertex at the source end of an edge. If the dependency graph is complete, so that every vertex has an incoming edge from the x-dimension dependency and another from the y-dimension dependency, then relative positioning will never lose any layout information. I suppose that optimizations could be made in the case of relative zero positions, such as the x distance from a lifeline to a message end on that lifeline, by omitting those edges in the dependency graph, but it's probably too early for such considerations.
Relative positioning could also help with the problem identified by Rémi of the impact of inserting elements into the midst of the layout: they insert new vertices and edges into the dependency graph, but existing edges don't change their weights (being the relative positions), so the layout of the rest of the diagram isn't affected and doesn't have to be calculated until (and if) it is redrawn.
Antonio: However favorable relative positions may seem, they have a number of drawbacks. Whenever a fragment / message is deleted or moved, all the relative positions need to be recalculated in the affected lifelines, and probably in all the other ones, as a message's relative position is based on two anchor points. As in general all of them need to be recalculated in almost any operation, it looks to me like they won't provide any advantage over absolute ones, while handling the absolute ones is simpler and does not have to deal with potential conflicts with the semantic order when moving things around.
Christian: Find an initial draft of how a Dependency Graph API might look and work in the dependency-graph branch, especially the `ExampleVerticalRepositionVisitor` class in the tests, which illustrates computation of changes down the sequence diagram layout from some change higher up. This is tracked in Issue #15.
All: Introduce our own context to avoid being contaminated by the extensible element types framework. Still, we need to selectively introduce the ones that we want to keep (control mode; stereotype applications).
All: Open question: which component is responsible for the semantic model changes; is it only the logical model, or also the semantic graph? In our view, the semantic graph is a pure data structure that distills the information needed to derive what to update in the view.
All: Edits should not be distributed; they should be the responsibility of one single component.
All: relative positions are favorable (for EMF Compare and also for the layout).
Antonio: I disagree; they won't add much value and will force far more complex handling in the logical model and make the whole thing more sensitive to information loss.
Philip: But then we have to accept that we may lose certain graphical-only modifications during diff-merge, as it may be hard to reliably derive graphical-only changes from the absolute values in a few cases. Papyrus Compare can be customized to re-use the logical model when merging changes so that the graphical and semantic order are kept consistent. But aside from that, the customization has to be created specifically for Papyrus Compare, e.g. for separating graphical differences that are an implicit consequence of another change from those that were explicitly applied by the user, handling conflicts among implicitly and explicitly applied graphical changes, etc.
All: The default should be 0, as that is the default for EMF (and is not serialized), and not -1 as interpreted by GMF.