route53 collection attempt #7

Merged: 4 commits, Feb 21, 2013

Conversation

@ralph-tice
Contributor

The workflow I'm trying to enable is searching hosted zones for specific load balancer or Elastic IP entries:

http://localhost:8080/edda/api/v2/aws/hostedZones/

I took a stab at pulling in the recordSets but couldn't quite figure out how to smash them together into one map in Scala. I'll take another stab at this but figured I'd put my progress out there for C&C.
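
For context, once the records land in a collection, Edda's matrix-argument query syntax should make that kind of search expressible as something like this (the collection and field names here are assumptions, not a settled API):

GET /edda/api/v2/aws/hostedRecords;resourceRecords.value=1.2.3.4;_pp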

Google Groups discussion reference: https://groups.google.com/forum/?fromgroups=#!topic/edda-users/CZN5xp506g8

@coryb
Contributor

coryb commented Feb 19, 2013

Looks good so far.

override def doCrawl() = ctx.awsClient.route53.listHostedZones(request).getHostedZones.asScala.map(
  item => Record(item.getId, ctx.beanMapper(item))
).toSeq

In that, I would change the item.getId to item.getName so we can use the human-readable form to get the records, like:
GET /edda/v2/aws/hostedZone/route53.mycompany.net.
instead of
GET /edda/v2/aws/hostedZone/X28UDUEE37NFMQ
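
Concretely, that would be a one-line change to the crawl above (just a sketch; the surrounding code stays the same):

  item => Record(item.getName, ctx.beanMapper(item))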

Also, it looks like the "wip" is trying to merge the records into the same collection. Perhaps it would be better/easier to create a "hostedRecords" collection to track the zone entries as individual documents.
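
For what it's worth, here is a rough sketch of such a hostedRecords crawler, assuming the AWS SDK's ListResourceRecordSetsRequest/listResourceRecordSets calls and the same Record/beanMapper helpers used above. The zone merge assumes beanMapper yields a Map, pagination is elided, and the names are illustrative rather than final:

import scala.collection.JavaConverters._
import com.amazonaws.services.route53.model.ListResourceRecordSetsRequest

override def doCrawl() = {
  // crawl every zone, then flatten each zone's record sets into
  // individual documents keyed by record name
  val zones = ctx.awsClient.route53.listHostedZones(request).getHostedZones.asScala
  zones.flatMap(zone => {
    val req = new ListResourceRecordSetsRequest(zone.getId)
    ctx.awsClient.route53.listResourceRecordSets(req).getResourceRecordSets.asScala.map(rr => {
      // carry the parent zone's id/name so each record can be tied back to its zone
      val data = ctx.beanMapper(rr).asInstanceOf[Map[String, Any]] ++
        Map("zoneId" -> zone.getId, "zoneName" -> zone.getName)
      Record(rr.getName, data)
    })
  }).toSeq
}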

Thanks
-Cory

@ralph-tice
Contributor Author

Thanks for the comments! I thought it would be more useful to have all the hosted records come up when you pull up a zone, but I think I see what you're getting at and I agree. That should help for tracking changes and searching across multiple zones... I'll hack away at this some more.

… dependent crawler of the hostedZone crawler
@ralph-tice
Contributor Author

All the data is there! Sort of.

I tried using name as the ID, but I need the hosted zone ID to pass into the ListResourceRecordSetsRequest in my dependent crawler. On AwsCrawlers.scala:645 I have:

zone.copy(data = Map("zone" -> zone.toMap, "resourceRecordSets" -> resourceRecordSets.toString))

which gets me all the data we should have, but the JSON coming out of toString is getting escaped in the resulting record set, and copying the zone probably isn't the best way to create the record. I couldn't quite figure out how to manipulate collections in Scala this evening. I want to get at least the ResourceRecordSets as distinct records before I'm through, but I need to tie them together by zone ID for retrieval, I think?
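
One way to avoid the escaping might be to store the record sets as structured data instead of a toString blob, so they serialize as nested JSON rather than an escaped string (a sketch, assuming resourceRecordSets is the java.util.List from getResourceRecordSets and JavaConverters' asScala is in scope):

zone.copy(data = Map(
  "zone" -> zone.toMap,
  "resourceRecordSets" -> resourceRecordSets.asScala.map(rr => ctx.beanMapper(rr)).toSeq
))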

* flatten hostedRecords set to track individual records
  * modify original record to include zone id/name to aid tracking relationships
* set hostedZones id to be record name
@coryb
Contributor

coryb commented Feb 21, 2013

I made some changes and created a pull request for your branch:
ralph-tice#1

Please review and merge (or modify if the patch does not meet your needs).
Thanks
-Cory

tweaks to route53 patch
coryb added a commit that referenced this pull request Feb 21, 2013
route53 collection attempt
coryb merged commit 1f02c1b into Netflix:master Feb 21, 2013
@ralph-tice
Contributor Author

Looks like you've 100%'d it, Cory. I tested this locally and the functionality looks good.

There is a bug that this exposes, though. Some DNS records can have names like *.domain.com or @.domain.com, and they get encoded as \052.domain.com or \100.domain.com, and I can't seem to craft an appropriate URL to retrieve these. I'll open another PR if I figure out how to address this. I think the correct behavior would be for them to be stored natively, with * passed through on the URL unchanged and @ passed through as %40. I suspect this is related to the values being escaped for JSON encoding.
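
For what it's worth, the octal escapes look like Route 53's own encoding rather than JSON: \052 is ASCII 42 (*) and \100 is ASCII 64 (@). If the stored id really is the escaped string, percent-encoding the backslash is one thing to try (a sketch; the hostedRecord endpoint name is hypothetical):

val stored = "\\052.domain.com."  // the literal characters \052.domain.com.
val encoded = java.net.URLEncoder.encode(stored, "UTF-8")
println(encoded)  // %5C052.domain.com.
println(s"GET /edda/v2/aws/hostedRecord/$encoded")  // hypothetical endpoint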

@coryb
Contributor

coryb commented Feb 21, 2013

Interesting, I didn't run across any domain records like that in my testing. I would expect it to work with %2a and %40 in the URI, like you mentioned. Perhaps it is a JSON issue, but I can't imagine why it would encode ASCII characters to octal. Since I can't reproduce it, I look forward to hearing what you find.

Thanks,
-Cory

@ralph-tice
Contributor Author

They're SPF records, which I'm using for Amazon SES.
